[openstack-community] Internal Software Testing Cloud using spare hardware

Frank Wilson fajwilson at gmail.com
Wed Apr 16 06:46:56 UTC 2014


Hi Marko,

Thanks, you helped me persevere :). I eventually worked out a
solution. I used a 'flat' network for the routable network and GRE
tunnels for the private networks. The other crucial thing was to
treat my cluster-facing network segment as the 'external' network,
whereas I would normally have considered it an 'internal'
network.
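
For anyone digging this out of the archives later, that layout can be sketched with the neutron CLI of the time. This is a config-style fragment, not my exact setup: the network names, the `physnet1` label (which would have to match the ML2/OVS `bridge_mappings`) and all addresses are illustrative placeholders.

```shell
# Flat provider network, treated as neutron's 'external' network;
# it maps onto the routable cluster-facing segment.
neutron net-create ext-net --router:external=True \
    --provider:network_type flat --provider:physical_network physnet1
# DHCP stays off so the existing LAN's addressing is left alone;
# gateway and allocation pool are example values.
neutron subnet-create ext-net 192.168.1.0/24 --name ext-subnet \
    --disable-dhcp --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.100,end=192.168.1.200

# GRE tenant network for the cloud-private side
neutron net-create priv-net
neutron subnet-create priv-net 10.0.0.0/24 --name priv-subnet

# Router joining the two, so private instances can reach the LAN
neutron router-create ext-router
neutron router-gateway-set ext-router ext-net
neutron router-interface-add ext-router priv-subnet
```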

I had read several things saying that you shouldn't deploy
OpenStack without getting lots of people involved, so I was starting
to think this applied to what I was doing. Thankfully that didn't turn
out to be the case!

Frank


On 14 April 2014 08:35, Marko Sluga <marko.sluga at chs.si> wrote:
> Hello Frank,
>
> What you are trying to do is one of the best case scenarios that
> OpenStack was designed for :)
>
> There's no single, definitive answer to why you are unable to achieve
> your goal, because it is certainly achievable. I'd recommend you either
> take a deeper look into configuring networking on OpenStack or, if you
> need to save some time, use one of the installable OpenStack
> distributions such as Mirantis Fuel, Rackspace Private Cloud, Piston
> Cloud or others:
> https://wiki.openstack.org/wiki/Get_OpenStack#Commercial_Distributions
>
> Regards,
>
> Marko
>
> Marko Sluga | CHS d.o.o.
> Address: Tehnoloski Park 18 | Ljubljana | SI 1000 | Slovenia
> Telephone: +386 1 475 95 28 | Mobile: +386 40 84 44 04 | Fax: +386 1 475 95 01
> email: marko.sluga at chs.si | www: http://www.chsitipo.si
>
>
> -----Original Message-----
> From: Frank Wilson [mailto:fajwilson at gmail.com]
> Sent: Sunday, April 13, 2014 8:17 PM
> To: community at lists.openstack.org
> Subject: [openstack-community] Internal Software Testing Cloud using
> spare hardware
>
> Hi,
>
> I have been trying to get networking working in a 'simple' internal
> cloud for a couple of months now and I am beginning to give up.
>
> This internal cloud would be used for testing distributed software
> systems. There are no external users and no multi-tenancy.
>
> Basically I have four spare machines and a couple of switches, nothing special:
>
> * Two that don't support hardware virtualisation. These would make good
> controllers / LXC compute nodes.
> * Two that do support hardware virtualisation. So I was planning to use
> KVM here.
> * One managed 1Gbps switch (although I've not made use of the managed
> features yet)
> * One unmanaged 100Mbps switch (I almost want to throw this away)
> * Each machine has two network ports, one internal and one external.
> * I don't have control over the gateway router in the external LAN that
> the machines are connected to
>
> Basically what I'd like to do is have a multi (compute) host cloud that
> supports VMs with two interfaces, one public interface with a routable
> ip (on the private LAN, but outside the cloud) and one private (only
> routable within the cloud). The attractive thing about this setup is
> that, from the point of view of the software running in the cloud, it
> mimics the basic setup in public clouds. So if we needed to scale up we
> could point our scripts to a different cloud and still take advantage of
> low traffic costs on their 'internal' networks.
>
> It's the networking that is the major problem for me. Not really
> knowing which networking daemon was necessary, I started out with
> nova-network. This almost worked, but it was hard to support two guest
> networks. It might have worked if it were possible to run two DHCP
> servers on one bridge (a limitation of the nova-network daemon caused
> the second DHCP server to overwrite the config of the first!). Another
> way it might have worked would have been if Linux bridge let you
> connect a real port to two bridges, or bridges to one another, but it
> doesn't.
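
(For reference, the Linux bridge constraint described above is easy to reproduce with brctl. Interface and bridge names here are illustrative and the commands need root; this is a demonstration sketch, not part of the original setup.)

```shell
brctl addbr br0
brctl addbr br1
brctl addif br0 eth1   # ok: eth1 is enslaved to br0
brctl addif br1 eth1   # fails: a port can belong to only one bridge
brctl addif br0 br1    # fails: a bridge cannot be a port of another bridge
```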
>
> So then I tried neutron but the guides that I found were vague and had
> surprising hardware requirements, like
>
> * Need a managed switch (in addition to OVS!)
> * Need an external router (disappointing given that nova network had a
> software router on each compute node!)
>
> These requirements seemed to stem from the extra security needed for
> multi-tenancy, which is not relevant to my use case. But after having
> tried many different permutations of settings in neutron I can't see a
> way forward.
>
> Is what I am doing impossible?
>
> Frank
>
> _______________________________________________
> Community mailing list
> Community at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/community
>


