[openstack-community] Create VM network failed with HTTP 500, need help

董建华 dongjh at nci.com.cn
Wed Oct 23 03:59:32 UTC 2013


Hello everybody,

I followed the document 'openstack-install-guide-apt-havana' to install OpenStack on Ubuntu 12.04, but I got an error when creating the VM network. Has anybody seen this error before?

root@controller:/etc/nova# nova network-create vmnet --fixed-range-v4=192.168.11.192/26 --bridge-interface=br100 --multi-host=T --gateway=192.168.11.254 --dns1=221.12.1.227 --dns2=221.12.1.228
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-158a01c3-3189-419e-ae30-9cf3b7c6655e)

Running the same command again with --debug shows the full request/response exchange:

root@controller:/etc/nova# nova --debug network-create vmnet --fixed-range-v4=192.168.11.192/26 --bridge-interface=br100 --multi-host=T

REQ: curl -i http://controller:35357/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "openstack"}}}'

INFO (connectionpool:202) Starting new HTTP connection (1): controller
DEBUG (connectionpool:296) "POST /v2.0/tokens HTTP/1.1" 200 3822
RESP: [200] CaseInsensitiveDict({'date': 'Wed, 23 Oct 2013 03:32:28 GMT', 'vary': 'X-Auth-Token', 'content-length': '3822', 'content-type': 'application/json'})
RESP BODY: {"access": {"token": {"issued_at": "2013-10-23T03:32:27.952192", "expires": "2013-10-24T03:32:27Z", "id": "MIIHJgYJKoZIhvcNAQcCoIIHFzCCBxMCAQExCTAHBgUrDgMCGjCCBXwGCSqGSIb3DQEHAaCCBW0EggVpeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMC0yM1QwMzozMjoyNy45NTIxOTIiLCAiZXhwaXJlcyI6ICIyMDEzLTEwLTI0VDAzOjMyOjI3WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjM4MmNlODVlZjAwOTQ4YTNhMTQ0MmU0NGY5ZDAzM2VkIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9jb250cm9sbGVyOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vY29udHJvbGxlcjo5MjkyIiwgImlkIjogIjYyOGUyYmJkM2YzNjRlNWNhMDM5Zjc2ZjAwMWYyYTUxIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY29udHJvbGxlcjo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2NvbnRyb2xsZXI6ODc3NC92Mi8zODJjZTg1ZWYwMDk0OGEzYTE0NDJlNDRmOWQwMzNlZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9jb250cm9sbGVyOjg3NzQvdjIvMzgyY2U4NWVmMDA5NDhhM2ExNDQyZTQ0ZjlkMDMzZWQiLCAiaWQiOiAiNTM4ZjRlMWNkM2EzNDk4OWE3MzgzOWFjYzMzYWNmNjQiLCAicHVibGljVVJMIjogImh0dHA6Ly9jb250cm9sbGVyOjg3NzQvdjIvMzgyY2U4NWVmMDA5NDhhM2ExNDQyZTQ0ZjlkMDMzZWQifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vY29udHJvbGxlcjozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL2NvbnRyb2xsZXI6NTAwMC92Mi4wIiwgImlkIjogIjI5MjNiODgwY2FkZDQ2ZjZiODk3NGZhNzlmMjY3Y2ZlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY29udHJvbGxlcjo1MDAwL3YyLjAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaWRlbnRpdHkiLCAibmFtZSI6ICJrZXlzdG9uZSJ9XSwgInVzZXIiOiB7InVzZXJuYW1lIjogImFkbWluIiwgInJvbGVzX2xpbmtzIjogW10sICJpZCI6ICJlZWNiMmI1ZjJiNGY0ODE5ODBhNTU0NmFmNjgwNDgxYyIsICJyb2xlcyI6IFt7Im5hbWUiOiAiYWRtaW4ifV0sICJuYW1lIjogImFkbWluIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjRhMGYxMDgyYzE0ODRjMzc4YjQyMjcxOTljM2E3NDJlIl19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAQ8cWZqOlLmHJdqZdqEOqjPVrIIPnGB33rQ1h4etoNGDJwy2YwX7v8Kzw1agu7I83JKhWltOjBONsQZAegvDCBkNSn91O5o2tDXRGJzyUhje8ryQzBi-TwZjXAGtoT6dBCfjgP6wBrMFkBX7BaHj0P4I+QkGHq9wFiMi2q5gRO4Kuj8kM7PLjuWv1UuJTeZqmxBeQMqbxKSEYY-VztxtVTq95yRrd6rRbJAIuyeimigBSan8E+tFPPUpINecCt8Fhot-4kHE6Ts8o9og-cjuGi5FqKQ7En6XFDrdIPhPT8noe-+QdyeTMSBeeLHxeyIgXy7Da7NE2oei8etTioAu6hQ==", "tenant": {"description": "Admin Tenant", "enabled": true, "id": "382ce85ef00948a3a1442e44f9d033ed", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://controller:9292", "region": "regionOne", "internalURL": "http://controller:9292", "id": "628e2bbd3f364e5ca039f76f001f2a51", "publicURL": "http://controller:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://controller:8774/v2/382ce85ef00948a3a1442e44f9d033ed", "region": "regionOne", "internalURL": "http://controller:8774/v2/382ce85ef00948a3a1442e44f9d033ed", "id": "538f4e1cd3a34989a73839acc33acf64", "publicURL": "http://controller:8774/v2/382ce85ef00948a3a1442e44f9d033ed"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://controller:35357/v2.0", "region": "regionOne", "internalURL": "http://controller:5000/v2.0", "id": "2923b880cadd46f6b8974fa79f267cfe", "publicURL": "http://controller:5000/v2.0"}], 
"endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "eecb2b5f2b4f481980a5546af680481c", "roles": [{"name": "admin"}], "name": "admin"}, "metadata": {"is_admin": 0, "roles": ["4a0f1082c1484c378b4227199c3a742e"]}}}

REQ: curl -i http://controller:8774/v2/382ce85ef00948a3a1442e44f9d033ed/os-networks -X POST -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: MIIHJgYJKoZIhvcNAQcCoIIHFzCCBxMCAQExCTAHBgUrDgMCGjCCBXwGCSqGSIb3DQEHAaCCBW0EggVpeyJhY2Nlc3MiOiB7InRva2VuIjogeyJpc3N1ZWRfYXQiOiAiMjAxMy0xMC0yM1QwMzozMjoyNy45NTIxOTIiLCAiZXhwaXJlcyI6ICIyMDEzLTEwLTI0VDAzOjMyOjI3WiIsICJpZCI6ICJwbGFjZWhvbGRlciIsICJ0ZW5hbnQiOiB7ImRlc2NyaXB0aW9uIjogIkFkbWluIFRlbmFudCIsICJlbmFibGVkIjogdHJ1ZSwgImlkIjogIjM4MmNlODVlZjAwOTQ4YTNhMTQ0MmU0NGY5ZDAzM2VkIiwgIm5hbWUiOiAiYWRtaW4ifX0sICJzZXJ2aWNlQ2F0YWxvZyI6IFt7ImVuZHBvaW50cyI6IFt7ImFkbWluVVJMIjogImh0dHA6Ly9jb250cm9sbGVyOjkyOTIiLCAicmVnaW9uIjogInJlZ2lvbk9uZSIsICJpbnRlcm5hbFVSTCI6ICJodHRwOi8vY29udHJvbGxlcjo5MjkyIiwgImlkIjogIjYyOGUyYmJkM2YzNjRlNWNhMDM5Zjc2ZjAwMWYyYTUxIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY29udHJvbGxlcjo5MjkyIn1dLCAiZW5kcG9pbnRzX2xpbmtzIjogW10sICJ0eXBlIjogImltYWdlIiwgIm5hbWUiOiAiZ2xhbmNlIn0sIHsiZW5kcG9pbnRzIjogW3siYWRtaW5VUkwiOiAiaHR0cDovL2NvbnRyb2xsZXI6ODc3NC92Mi8zODJjZTg1ZWYwMDk0OGEzYTE0NDJlNDRmOWQwMzNlZCIsICJyZWdpb24iOiAicmVnaW9uT25lIiwgImludGVybmFsVVJMIjogImh0dHA6Ly9jb250cm9sbGVyOjg3NzQvdjIvMzgyY2U4NWVmMDA5NDhhM2ExNDQyZTQ0ZjlkMDMzZWQiLCAiaWQiOiAiNTM4ZjRlMWNkM2EzNDk4OWE3MzgzOWFjYzMzYWNmNjQiLCAicHVibGljVVJMIjogImh0dHA6Ly9jb250cm9sbGVyOjg3NzQvdjIvMzgyY2U4NWVmMDA5NDhhM2ExNDQyZTQ0ZjlkMDMzZWQifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiY29tcHV0ZSIsICJuYW1lIjogIm5vdmEifSwgeyJlbmRwb2ludHMiOiBbeyJhZG1pblVSTCI6ICJodHRwOi8vY29udHJvbGxlcjozNTM1Ny92Mi4wIiwgInJlZ2lvbiI6ICJyZWdpb25PbmUiLCAiaW50ZXJuYWxVUkwiOiAiaHR0cDovL2NvbnRyb2xsZXI6NTAwMC92Mi4wIiwgImlkIjogIjI5MjNiODgwY2FkZDQ2ZjZiODk3NGZhNzlmMjY3Y2ZlIiwgInB1YmxpY1VSTCI6ICJodHRwOi8vY29udHJvbGxlcjo1MDAwL3YyLjAifV0sICJlbmRwb2ludHNfbGlua3MiOiBbXSwgInR5cGUiOiAiaWRlbnRpdHkiLCAibmFtZSI6ICJrZXlzdG9uZSJ9XSwgInVzZXIiOiB7InVzZXJuYW1lIjogImFkbWluIiwgInJvbGVzX2xpbmtzIjogW10sICJpZCI6ICJlZWNiMmI1ZjJiNGY0ODE5ODBhNTU0NmFmNjgwNDgxYyIsICJyb2xlcyI6IFt7Im5hbWUiOiAiYWRtaW4ifV0sICJuYW1lIjogImFkbWluIn0sICJtZXRhZGF0YSI6IHsiaXNfYWRtaW4iOiAwLCAicm9sZXMiOiBbIjRhMGYxMDgyYzE0ODRjMzc4YjQyMjcxOTljM2E3NDJlIl19fX0xggGBMIIBfQIBATBcMFcxCzAJBgNVBAYTAlVTMQ4wDAYDVQQIDAVVbnNldDEOMAwGA1UEBwwFVW5zZXQxDjAMBgNVBAoMBVVuc2V0MRgwFgYDVQQDDA93d3cuZXhhbXBsZS5jb20CAQEwBwYFKw4DAhowDQYJKoZIhvcNAQEBBQAEggEAQ8cWZqOlLmHJdqZdqEOqjPVrIIPnGB33rQ1h4etoNGDJwy2YwX7v8Kzw1agu7I83JKhWltOjBONsQZAegvDCBkNSn91O5o2tDXRGJzyUhje8ryQzBi-TwZjXAGtoT6dBCfjgP6wBrMFkBX7BaHj0P4I+QkGHq9wFiMi2q5gRO4Kuj8kM7PLjuWv1UuJTeZqmxBeQMqbxKSEYY-VztxtVTq95yRrd6rRbJAIuyeimigBSan8E+tFPPUpINecCt8Fhot-4kHE6Ts8o9og-cjuGi5FqKQ7En6XFDrdIPhPT8noe-+QdyeTMSBeeLHxeyIgXy7Da7NE2oei8etTioAu6hQ==" -d '{"network": {"cidr": "192.168.11.192/26", "bridge_interface": "br100", "multi_host": true, "label": "vmnet"}}'

INFO (connectionpool:202) Starting new HTTP connection (1): controller
DEBUG (connectionpool:296) "POST /v2/382ce85ef00948a3a1442e44f9d033ed/os-networks HTTP/1.1" 500 128
RESP: [500] CaseInsensitiveDict({'date': 'Wed, 23 Oct 2013 03:33:28 GMT', 'content-length': '128', 'content-type': 'application/json; charset=UTF-8', 'x-compute-request-id': 'req-54233a4b-f8eb-4136-9dec-3527b1a90573'})
RESP BODY: {"computeFault": {"message": "The server has either erred or is incapable of performing the requested operation.", "code": 500}}

DEBUG (shell:740) The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-54233a4b-f8eb-4136-9dec-3527b1a90573)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 737, in main
    OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 673, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 905, in do_network_create
    cs.networks.create(**kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/networks.py", line 94, in create
    return self._create('/os-networks', body, 'network')
  File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 145, in _create
    _resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 232, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 213, in _cs_request
    **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 195, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 189, in request
    raise exceptions.from_response(resp, body, url, method)
ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-54233a4b-f8eb-4136-9dec-3527b1a90573)
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-54233a4b-f8eb-4136-9dec-3527b1a90573)
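
Since the client only reports this generic 500, I guess the real traceback is in the nova-api log on the controller (logdir=/var/log/nova in my nova.conf below, so I assume the file is /var/log/nova/nova-api.log). Something like this should pull out the underlying error:

# search the nova-api log for the failing request ID from the run above
grep -B 5 -A 30 "req-54233a4b-f8eb-4136-9dec-3527b1a90573" /var/log/nova/nova-api.log
# or watch the log while re-running the command
tail -f /var/log/nova/nova-api.log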

root@controller:/etc/nova# nova image-list
+--------------------------------------+--------------+--------+--------+
| ID                                   | Name         | Status | Server |
+--------------------------------------+--------------+--------+--------+
| 26fa8866-d075-444d-9844-61b7c22e724b | CirrOS 0.3.1 | ACTIVE |        |
+--------------------------------------+--------------+--------+--------+
root@controller:/etc/nova# nova host-list
+------------+-------------+----------+
| host_name  | service     | zone     |
+------------+-------------+----------+
| controller | cert        | internal |
| controller | consoleauth | internal |
| controller | scheduler   | internal |
| controller | conductor   | internal |
+------------+-------------+----------+
root@controller:/etc/nova# nova service-list
+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| nova-cert        | controller | internal | enabled | up    | 2013-10-23T03:34:12.000000 | None            |
| nova-consoleauth | controller | internal | enabled | up    | 2013-10-23T03:34:14.000000 | None            |
| nova-scheduler   | controller | internal | enabled | up    | 2013-10-23T03:34:16.000000 | None            |
| nova-conductor   | controller | internal | enabled | up    | 2013-10-23T03:34:08.000000 | None            |
+------------------+------------+----------+---------+-------+----------------------------+-----------------+

root@controller:/etc/nova# keystone endpoint-list
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
|                id                |   region  |                publicurl                |               internalurl               |                 adminurl                |            service_id            |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+
| 0a66d7d8296a4ae8abec433370cb2c16 | regionOne | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | http://controller:8774/v2/%(tenant_id)s | b743b97c9a1947b085f7c497d746c3d1 |
| 0bba90371145461a91b2f22f8b2dbe29 | regionOne |       http://controller:5000/v2.0       |       http://controller:5000/v2.0       |       http://controller:35357/v2.0      | d1791293d4ba4bdb88b7f47327bb2aaa |
| 761dc1de52c94deda7224e028e5f71ef | regionOne |          http://controller:9292         |          http://controller:9292         |          http://controller:9292         | 720c5da0b7c14200b5818c5c97c5b20c |
+----------------------------------+-----------+-----------------------------------------+-----------------------------------------+-----------------------------------------+----------------------------------+


root@controller:/etc/nova# cat nova.conf
[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata
my_ip=10.10.10.180
vncserver_listen=10.10.10.180
vncserver_proxyclient_address=10.10.10.180
auth_strategy=keystone
rpc_backend=nova.rpc.impl_kombu
rabbit_host=controller
rabbit_port=5672
rabbit_password=guest
[database]
# The SQLAlchemy connection string used to connect to the database
connection=mysql://nova:openstack@controller/nova

root@controller:/etc/nova# cat /etc/nova/api-paste.ini
############
# Metadata #
############
[composite:metadata]
use = egg:Paste#urlmap
/: meta

[pipeline:meta]
pipeline = ec2faultwrap logrequest metaapp

[app:metaapp]
paste.app_factory = nova.api.metadata.handler:MetadataRequestHandler.factory

#######
# EC2 #
#######

[composite:ec2]
use = egg:Paste#urlmap
/services/Cloud: ec2cloud

[composite:ec2cloud]
use = call:nova.api.auth:pipeline_factory
noauth = ec2faultwrap logrequest ec2noauth cloudrequest validator ec2executor
keystone = ec2faultwrap logrequest ec2keystoneauth cloudrequest validator ec2executor

[filter:ec2faultwrap]
paste.filter_factory = nova.api.ec2:FaultWrapper.factory

[filter:logrequest]
paste.filter_factory = nova.api.ec2:RequestLogging.factory

[filter:ec2lockout]
paste.filter_factory = nova.api.ec2:Lockout.factory

[filter:ec2keystoneauth]
paste.filter_factory = nova.api.ec2:EC2KeystoneAuth.factory

[filter:ec2noauth]
paste.filter_factory = nova.api.ec2:NoAuth.factory

[filter:cloudrequest]
controller = nova.api.ec2.cloud.CloudController
paste.filter_factory = nova.api.ec2:Requestify.factory

[filter:authorizer]
paste.filter_factory = nova.api.ec2:Authorizer.factory

[filter:validator]
paste.filter_factory = nova.api.ec2:Validator.factory

[app:ec2executor]
paste.app_factory = nova.api.ec2:Executor.factory

#############
# Openstack #
#############

[composite:osapi_compute]
use = call:nova.api.openstack.urlmap:urlmap_factory
/: oscomputeversions
/v1.1: openstack_compute_api_v2
/v2: openstack_compute_api_v2
/v3: openstack_compute_api_v3

[composite:openstack_compute_api_v2]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth ratelimit osapi_compute_app_v2
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit osapi_compute_app_v2
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v2

[composite:openstack_compute_api_v3]
use = call:nova.api.auth:pipeline_factory
noauth = faultwrap sizelimit noauth_v3 ratelimit_v3 osapi_compute_app_v3
keystone = faultwrap sizelimit authtoken keystonecontext ratelimit_v3 osapi_compute_app_v3
keystone_nolimit = faultwrap sizelimit authtoken keystonecontext osapi_compute_app_v3

[filter:faultwrap]
paste.filter_factory = nova.api.openstack:FaultWrapper.factory

[filter:noauth]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddleware.factory

[filter:noauth_v3]
paste.filter_factory = nova.api.openstack.auth:NoAuthMiddlewareV3.factory

[filter:ratelimit]
paste.filter_factory = nova.api.openstack.compute.limits:RateLimitingMiddleware.factory

[filter:ratelimit_v3]
paste.filter_factory = nova.api.openstack.compute.plugins.v3.limits:RateLimitingMiddleware.factory

[filter:sizelimit]
paste.filter_factory = nova.api.sizelimit:RequestBodySizeLimiter.factory

[app:osapi_compute_app_v2]
paste.app_factory = nova.api.openstack.compute:APIRouter.factory

[app:osapi_compute_app_v3]
paste.app_factory = nova.api.openstack.compute:APIRouterV3.factory

[pipeline:oscomputeversions]
pipeline = faultwrap oscomputeversionapp

[app:oscomputeversionapp]
paste.app_factory = nova.api.openstack.compute.versions:Versions.factory

##########
# Shared #
##########

[filter:keystonecontext]
paste.filter_factory = nova.api.auth:NovaKeystoneContext.factory

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = openstack
# signing_dir is configurable, but the default behavior of the authtoken
# middleware should be sufficient. It will create a temporary directory
# in the home directory for the user the nova process is running as.
#signing_dir = /var/lib/nova/keystone-signing
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
auth_version = v2.0

root@controller:/etc/nova# cat /etc/hosts
127.0.0.1 localhost
# 192.168.11.180 controller
# 192.168.11.181 network
# 192.168.11.182 compute1
# 192.168.11.183 compute2
10.10.10.180 controller
10.10.10.181 network
10.10.10.182 compute1
10.10.10.183 compute2

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@controller:/etc/nova# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether b8:ca:3a:ec:7b:ca brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.180/24 brd 192.168.11.255 scope global eth0
    inet6 fe80::baca:3aff:feec:7bca/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether b8:ca:3a:ec:7b:cc brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether b8:ca:3a:ec:7b:ce brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether b8:ca:3a:ec:7b:d0 brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:0a:f7:24:2d:80 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.180/24 brd 10.10.10.255 scope global eth4
    inet6 fe80::20a:f7ff:fe24:2d80/64 scope link
       valid_lft forever preferred_lft forever
7: eth5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:0a:f7:24:2d:82 brd ff:ff:ff:ff:ff:ff

BTW, in my environment, eth4 is the internal interface, and eth0 will be the public/bridge interface.
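
For completeness, this is roughly how I expect to map those interfaces in nova.conf on the compute nodes, following the guide's FlatDHCP example (these lines are my assumption and are not applied anywhere yet):

# FlatDHCP networking, per the Havana install guide (my assumption)
network_manager=nova.network.manager.FlatDHCPManager
flat_network_bridge=br100
# eth0 carries the bridge/VM traffic and floating IPs in my layout
flat_interface=eth0
public_interface=eth0
multi_host=True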

I have tried many times, including reinstalling the OS and the OpenStack packages, and I always hit the same error. Can anybody help with this?
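
If it helps, I can also double-check that the nova database schema really exists (the [database] connection above points at MySQL). I assume something like this would confirm it:

# re-run the schema sync; it should be a no-op if the tables are already there
nova-manage db sync
# list the nova tables directly (credentials taken from the connection string above)
mysql -h controller -u nova -popenstack nova -e "SHOW TABLES;"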