[OpenStack Foundation] [openstack-dev] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

Joshua Harlow harlowja at fastmail.com
Wed Apr 13 05:43:02 UTC 2016


Sure, so that helps, except it still runs into the mismatch with the 
API(s) of nova. This is why I'd rather have a template-style format (as, 
say, the input API) that allows for (optionally) expressing such 
container-specific capabilities/constraints.

Then some project that understands that template/format can, if needed, 
talk to a COE (or similar project) to translate that template 'segment' 
into a realized entity using the capabilities/constraints the template 
specified.
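
For example (purely hypothetical syntax, just to illustrate the shape of 
it), a template segment with an optional container-specific part might 
look like:

components:
  - label: api
    count: 3
    image: myapp:1.2
    # optional, container-specific hints; a backend that can't honor
    # them is free to ignore them or reject the segment
    container:
      runtime: docker
      cpu_shares: 512
      read_only_rootfs: true

A consumer that understands the 'container' hints could translate that 
segment into, say, a Kubernetes deployment with 3 replicas, while a 
different consumer could just as well realize it as 3 VMs.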

Overall it starts to feel like maybe it is time to change the upper and 
lower systems and shake things up a little ;)

Peng Zhao wrote:
> I'd take the idea further. Imagine a typical Heat template; all you
> need to do is:
>
> - replace the VM id with Docker image id
> - nothing else
> - run the script with a normal heat engine
> - the entire stack gets deployed in seconds
>
> Done!
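>
> A bare-bones HOT template for that would look roughly like this (a
> sketch only; the image and flavor values are just placeholders):
>
> heat_template_version: 2015-10-15
>
> resources:
>   web:
>     type: OS::Nova::Server
>     properties:
>       # a Docker image registered in Glance, rather than a VM image
>       image: nginx:latest
>       flavor: m1.small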
>
> Well, that sounds like nova-docker. What about cinder and neutron? They
> don't work well with Linux containers! The answer is Hypernova
> (https://github.com/hyperhq/hypernova) or Intel Clear Containers, which
> offer seamless integration with most OpenStack components.
>
> Summary: minimal changes to interface and upper systems, much smaller
> image and much better developer workflow.
>
> Peng
>
> -----------------------------------------------------
>      Hyper_ Secure Container Cloud
>
>
>
> On Wed, Apr 13, 2016 5:23 AM, Joshua Harlow <harlowja at fastmail.com>
> wrote:
>
>     Fox, Kevin M wrote:
>     > I think part of the problem is containers are mostly orthogonal
>     > to vms/bare metal. Containers are a package for a single service.
>     > Multiple can run on a single vm/bare metal host. Orchestration
>     > like Kubernetes comes in to turn a pool of vm's/bare metal into a
>     > system that can easily run multiple containers.
>
>     Is the orthogonal part a problem because we have made it so, or is
>     it just how it really is?
>
>     Brainstorming starts here:
>
>     Imagine a descriptor language like the following (which I stole
>     from https://review.openstack.org/#/c/210549 and modified):
>
>     ---
>     components:
>       - label: frontend
>         count: 5
>         image: ubuntu_vanilla
>         requirements: high memory, low disk
>         stateless: true
>       - label: database
>         count: 3
>         image: ubuntu_vanilla
>         requirements: high memory, high disk
>         stateless: false
>       - label: memcache
>         count: 3
>         image: debian-squeeze
>         requirements: high memory, no disk
>         stateless: true
>       - label: zookeeper
>         count: 3
>         image: debian-squeeze
>         requirements: high memory, medium disk
>         stateless: false
>         backend: VM
>     networks:
>       - label: frontend_net
>         flavor: "public network"
>         associated_with:
>           - frontend
>       - label: database_net
>         flavor: high bandwidth
>         associated_with:
>           - database
>       - label: backend_net
>         flavor: high bandwidth and low latency
>         associated_with:
>           - zookeeper
>           - memcache
>     constraints:
>       - ref: container_only
>         params:
>           - frontend
>       - ref: no_colocated
>         params:
>           - database
>           - frontend
>       - ref: spread
>         params:
>           - database
>       - ref: no_colocated
>         params:
>           - database
>           - frontend
>       - ref: spread
>         params:
>           - memcache
>       - ref: spread
>         params:
>           - zookeeper
>       - ref: isolated_network
>         params:
>           - frontend_net
>           - database_net
>           - backend_net
>     ...
>
>     Now nothing in the above is about containers, or bare metal, or VMs
>     (although an 'advanced' constraint can be that a component must be
>     on a container, and must, say, be deployed via docker image
>     XYZ...); instead it's just about the constraints that a user has on
>     their deployment and the components associated with it. It can be
>     left up to some consuming project of that format to decide how to
>     turn that desired description into an actual description (aka a
>     full expansion of that format into an actual deployment plan),
>     possibly by optimizing for density (packing as many things into
>     containers as possible), optimizing for security (by using VMs), or
>     optimizing for performance (by using bare metal).
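>
>     For instance (the output format here is made up, purely to show the
>     idea), a consumer that optimizes for density might expand the above
>     description into a concrete plan along these lines:
>
>     ---
>     placements:
>       - label: frontend
>         backend: container   # satisfies the container_only constraint
>         count: 5
>         hosts: [host-01, host-02]   # hypothetical hosts, none shared
>                                     # with database (no_colocated)
>       - label: database
>         backend: VM
>         count: 3
>         hosts: [host-03, host-04, host-05]   # spread across hosts
>       - label: memcache
>         backend: container
>         count: 3
>         hosts: [host-01, host-02, host-06]   # spread
>       - label: zookeeper
>         backend: VM          # honors the explicit backend hint above
>         count: 3
>         hosts: [host-03, host-04, host-05]   # spread
>     ...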
>
>     > So, rather than concern itself with supporting launching through
>     > a COE and through Nova, which are two totally different code
>     > paths, OpenStack advanced services like Trove could just use a
>     > Magnum COE and have a UI that asks which existing Magnum COE to
>     > launch in, or alternately kick off the "Launch new Magnum COE"
>     > workflow in horizon, then follow up with the Trove launch
>     > workflow. Trove then would support being able to use containers,
>     > users could potentially pack more containers onto their vm's than
>     > just Trove, and it still would work with both Bare Metal and VM's
>     > the same way since Magnum can launch on either. I'm afraid
>     > supporting both container and non-container deployment with Trove
>     > will be a large effort with very little code sharing. It may be
>     > easiest to have a flag version where non-container deployments
>     > are upgraded to containers, then non-container support is
>     > dropped.
>
>     Sure, trove seems like it would be a consumer of whatever
>     interprets that format, just like many other consumers could be
>     (with the special case that trove creates such a format on behalf
>     of some other consumer, aka the trove user).
>
>     > As for the app-catalog use case, the app-catalog project
>     > (http://apps.openstack.org) is working on some of that.
>     >
>     > Thanks,
>     > Kevin
>     > ________________________________________
>     > From: Joshua Harlow [harlowja at fastmail.com]
>     > Sent: Tuesday, April 12, 2016 12:16 PM
>     > To: Flavio Percoco; OpenStack Development Mailing List (not for
>     > usage questions)
>     > Cc: foundation at lists.openstack.org
>     > Subject: Re: [openstack-dev] [OpenStack Foundation]
>     > [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board
>     > of Directors Meeting)
>     >
>     > Flavio Percoco wrote:
>     >> On 11/04/16 18:05 +0000, Amrith Kumar wrote:
>     >>> Adrian, thx for your detailed mail.
>     >>>
>     >>> Yes, I was hopeful of a silver bullet and as we've discussed
>     >>> before (I think it was Vancouver), there's likely no silver
>     >>> bullet in this area. After that conversation, and some further
>     >>> experimentation, I found that even if Trove had access to a
>     >>> single Compute API, there were other significant complications
>     >>> further down the road, and I didn't pursue the project further
>     >>> at the time.
>     >>>
>     >> Adrian, Amrith,
>     >>
>     >> I've spent enough time researching this area during the last
>     >> month, and my conclusion is pretty much the above. There's no
>     >> silver bullet in this area, and I'd argue there shouldn't be one.
>     >> Containers, bare metal and VMs differ in such a way
>     >> (feature-wise) that it'd not be good, as far as deploying
>     >> databases goes, for there to be one compute API. Containers
>     >> allow for a different deployment architecture than VMs, and so
>     >> does bare metal.
>     >
>     > Just some thoughts from me, but why focus on the
>     > compute/container/baremetal API at all?
>     >
>     > I'd almost like a way that just describes how my app should be
>     > interconnected, what is required to get it going, and the
>     > features and/or scheduling requirements for the different parts
>     > of that app.
>     >
>     > To me it feels like this isn't a compute API or really a heat API
>     > but something else. Maybe it's closer to the docker compose
>     > API/template format or something like it.
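>     >
>     > For comparison, a minimal docker compose file looks roughly like
>     > this (from memory, so treat it as a sketch):
>     >
>     > version: "2"
>     > services:
>     >   web:
>     >     image: nginx:latest
>     >     depends_on:
>     >       - db
>     >   db:
>     >     image: postgres:9.4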
>     >
>     > Perhaps such a thing needs a new project. I'm not sure, but it
>     > does feel like, as developers, we should be able to make such a
>     > thing that still exposes the more advanced functionality of the
>     > underlying API so that it can be used if really needed...
>     >
>     > Maybe this is similar to an app-catalog, but that doesn't quite
>     > feel like it's the right thing either, so maybe somewhere in
>     > between...
>     >
>     > IMHO it'd be nice to have a unified story around what this thing
>     > is, so that we as a community can drive (as a single group)
>     > toward that; maybe this is where the product working group can
>     > help and we as a developer community can also try to unify
>     > behind...
>     >
>     > P.S. name for project should be 'silver' related, ha.
>     >
>     > -Josh
>



More information about the Foundation mailing list