Discussion:
[ovs-dev] OVN and OpenStack Provider Networks
Russell Bryant
2015-06-10 19:13:54 UTC
Permalink
I've been doing some thinking about OpenStack Neutron's provider
networks and how we might be able to support that with OVN as the
backend for Neutron. Here is a start. I'd love to hear what others think.


Provider Networks
=================

OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.

In the simplest case, it can be used in environments where operators
have no interest in tenant networks. Instead, they want all VMs hooked
up directly to a pre-defined network in their environment. This use
case is actually popular for private OpenStack deployments.

Neutron's current OVS agent that runs on network nodes and hypervisors
has this configuration entry:

bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]

This is used to name your physical networks and the bridge used to
access that physical network from the local node.
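For concreteness, the shape of this option can be shown with a small
Python sketch of how such a mapping string could be parsed into a
physical-network-to-bridge dictionary (the function name is
illustrative, not the agent's actual code):

```python
def parse_bridge_mappings(value):
    """Parse "physnet1:br-eth1,physnet2:br-eth2" into a dict
    mapping physical network names to local bridge names."""
    mappings = {}
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        physnet, _, bridge = entry.partition(":")
        if not physnet or not bridge:
            raise ValueError("invalid bridge mapping: %r" % entry)
        mappings[physnet] = bridge
    return mappings

print(parse_bridge_mappings("physnet1:br-eth1,physnet2:br-eth2"))
# → {'physnet1': 'br-eth1', 'physnet2': 'br-eth2'}
```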

Defining a provider network via the Neutron API, using the neutron
command-line client, looks like this:

$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat

A provider network can also be defined with a VLAN id:

$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101

Provider Networks with OVN
--------------------------

OVN does not currently support this use case, but it's required for
Neutron. Some use cases for provider networks are potentially better
served using OVN gateways. I think the "simple networking without
tenant networks" case is at least one strong use case worth supporting.

One possible implementation would be to have a Neutron agent
that runs parallel to ovn-controller that handles this. It would
perform a subset of what the current Neutron OVS agent does but would
also duplicate a lot of what OVN does. The other option is to have
ovn-controller implement it.

There are significant advantages to implementing this in OVN. First, it
simplifies the deployment. It saves the added complexity of a parallel
control plane for the neutron agents running beside ovn-controller.
Second, much of the flows currently programmed by ovn-controller
would be useful for provider networks. We still want to implement port
security and ACLs (security groups). The major difference is that
unless the packet is dropped by egress port security or ACLs, it should
be sent out the physical network and forwarded by the physical network
infrastructure.

ovn-controller would need the equivalent of the current OVS agent's
bridge_mappings configuration option. ovn-controller is
currently configured by setting values in the local Open_vSwitch
ovsdb database, so a similar configuration entry could be provided
there:

$ ovs-vsctl set open .
external-ids:bridge_mappings=physnet1:br-eth1,physnet2:br-eth2

ovn-controller would expect that the environment was pre-configured
with these bridges. It would create ports on br-int that connect to
each bridge.
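One way the bridge wiring could be sketched is with patch-port pairs
between br-int and each mapped bridge. The following Python sketch only
builds ovs-vsctl command lines rather than running them; the patch-port
naming scheme is hypothetical, not what ovn-controller would actually
use:

```python
def patch_port_commands(bridge_mappings, integration_bridge="br-int"):
    """Build ovs-vsctl command lines that create a patch-port pair
    between the integration bridge and each provider bridge."""
    commands = []
    for physnet, bridge in sorted(bridge_mappings.items()):
        # Hypothetical naming: one port on each side, peered together.
        int_port = "patch-%s-to-%s" % (integration_bridge, bridge)
        phys_port = "patch-%s-to-%s" % (bridge, integration_bridge)
        commands.append(
            ["ovs-vsctl",
             "--", "add-port", integration_bridge, int_port,
             "--", "set", "interface", int_port,
             "type=patch", "options:peer=%s" % phys_port,
             "--", "add-port", bridge, phys_port,
             "--", "set", "interface", phys_port,
             "type=patch", "options:peer=%s" % int_port])
    return commands

for cmd in patch_port_commands({"physnet1": "br-eth1"}):
    print(" ".join(cmd))
```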

These networks also need to be reflected in the OVN databases so that an
OVN logical port can be attached to a provider network. In
OVN_Northbound, we could add a new table called Physical_Switch
that a logical port could be attached to instead of Logical_Switch.
The ``Physical_Switch`` schema could be the same as Logical_Switch
except for the addition of 'type' (flat or vlan) and 'tag' (the VLAN
id for type=vlan)::

"Physical_Switch": {
"columns": {
"name": {"type": "string"},
"type": {"type": "string"},
"tag": {
"type": {"key": {"type": "integer",
"minInteger": 0,
"maxInteger": 4095},
"router_port": {"type": {"key": {"type": "uuid",
"refTable":
"Logical_Router_Port",
"refType": "strong"},
"min": 0, "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}}},

Currently, a Logical_Port has a lswitch column. It would also
need a pswitch column for when the port is attached to a
Physical_Switch:

"Logical_Port": {
"columns": {
"lswitch": {"type": {"key": {"type": "uuid",
"refTable": "Logical_Switch",
"refType": "strong"}}},
"pswitch": {"type": {"key": {"type": "uuid",
"refTable":
"Physical_Switch",
"refType": "strong"}}},
...

This would have an impact on the OVN_Southbound database, as well.
No schema changes have been identified. Entries in the Bindings
table would be the same, except that the UUID in the
logical_datapath column would refer to a Physical_Switch instead
of a Logical_Switch. The contents of the Pipeline and the
flows set up by ovn-controller would need to change.

In the case of a Physical_Switch instead of a Logical_Switch,
one major difference is that output to a port that is non-local is just
sent out to the physical network bridge instead of a tunnel.

Another difference is that packets for an unknown destination should
also be sent out to the physical network bridge instead of dropped.
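For illustration only, the two forwarding differences described above
can be contrasted in a small Python decision sketch (the names and
return strings are hypothetical, not ovn-controller code or flow
syntax):

```python
def choose_output(datapath_type, dest_port_is_local, dest_known):
    """Illustrative output decision for a packet that survives the
    egress pipeline, contrasting logical and physical datapaths."""
    if datapath_type == "logical":
        if not dest_known:
            # Logical switches drop unknown destinations.
            return "drop"
        return ("local delivery" if dest_port_is_local
                else "tunnel to remote chassis")
    elif datapath_type == "physical":
        if dest_port_is_local:
            return "local delivery"
        # Non-local and unknown destinations alike go out the
        # physical bridge; the fabric does the MAC learning.
        return "physical network bridge"
    raise ValueError("unknown datapath type: %r" % datapath_type)
```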


If there is some consensus that supporting something like this makes
sense, I'm happy to take on the next steps, which would include a more
detailed proposal that includes Pipeline and flow details, as well as
the implementation.

Thanks,
--
Russell Bryant
Salvatore Orlando
2015-06-13 22:12:11 UTC
Permalink
Hi Russell,

thanks for sharing these thoughts. I was indeed thinking as well that we
need to support this in OVN, as provider networks are a fairly basic
Neutron feature - despite being an "extension".
I have some comments inline. I apologise in advance for their dumbness
as I'm still getting up to speed with OVN architecture & internals.

Salvatore
Post by Russell Bryant
I've been doing some thinking about OpenStack Neutron's provider
networks and how we might be able to support that with OVN as the
backend for Neutron. Here is a start. I'd love to hear what others think.
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
Neutron's current OVS agent that runs on network nodes and hypervisors
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
This is used to name your physical networks and the bridge used to
access that physical network from the local node.
Defining a provider network via the Neutron API via the neutron
$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat
$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101
The only pedantic nit I have to add here is that the 'shared' setting
has nothing to do with provider networks, but on the other hand it is
also true that it is required for supporting the "can't be bothered by
tenant networks" use case.
Post by Russell Bryant
Provider Networks with OVN
--------------------------
OVN does not currently support this use case, but it's required for
Neutron. Some use cases for provider networks are potentially better
served using OVN gateways. I think the "simple networking without
tenant networks" case is at least one strong use case worth supporting.
the difference between the provider network and the L2 gateway is, in my
opinion, that the former is a mapping between a logical network and a
concrete physical network (I am considering VLANs as 'physical' here for
simplicity), whereas the L2 gateway is a "service" that inserts itself
into the logical topology to provide the same functionality. In other
words, with the provider network abstraction your traffic always goes on
your chosen physical network; with the L2 gateway abstraction your
traffic stays in the tenant network, likely an overlay, unless packets
are directed to an address not in the tenant network, in which case they
cross the gateway.

In my opinion it's just too difficult to state which abstraction is better
for given use cases. I'd rather expose both abstractions (the provider
network for now, and the gateway in the future), and let operators choose
the one that suits them better.
Post by Russell Bryant
One possible implementation would be to have a Neutron agent
that runs parallel to ovn-controller that handles this. It would
perform a subset of what the current Neutron OVS agent does but would
also duplicate a lot of what OVN does. The other option is to have
ovn-controller implement it.
There are significant advantages to implementing this in OVN. First, it
simplifies the deployment. It saves the added complexity of a parallel
control plane for the neutron agents running beside ovn-controller.
This alone would convince me to implement the solution with OVN; for
instance, the L3 agents would then have to be aware that interfaces
might be plugged into multiple bridges, VIF plugging might also differ
and therefore more work might be needed with port bindings, and finally
you'd have provider networks secured in the "neutron way", whereas
tenant networks would be secured in the "OVN way".
Nevertheless, this might be a case where possibly ML2 could be leveraged
(with or without tweaks) to ensure that the OVS driver implements provider
networks, whereas the OVN driver implements tenant networks.
Post by Russell Bryant
Second, much of the flows currently programmed by ovn-controller
would be useful for provider networks. We still want to implement port
security and ACLs (security groups). The major difference is that
unless the packet is dropped by egress port security or ACLs, it should
be sent out the physical network and forwarded by the physical network
infrastructure.
ovn-controller would need the equivalent of the current OVS agent's
bridge_mappings configuration option. ovn-controller is
currently configured by setting values in the local Open_vSwitch
ovsdb database, so a similar configuration entry could be provided
$ ovs-vsctl set open .
external-ids:bridge_mappings=physnet1:br-eth1,physnet2:br-eth2
ovn-controller would expect that the environment was pre-configured
with these bridges. It would create ports on br-int that connect to
each bridge.
These networks also need to be reflected in the OVN databases so that an
OVN logical port can be attached to a provider network. In
OVN_Northbound, we could add a new table called Physical_Switch
that a logical port could be attached to instead of Logical_Switch.
The provider network is still a logical network. I am not able to see a
reason for having to attach a logical port to a physical switch. Can you
explain?
It seems that you are trying to describe the physical network the
logical network maps to. This makes sense, but since in Neutron we also
have the "multi-provider" extension, which is a generalization of the
provider network concepts, would it make sense to consider some sort of
logical network bindings? These bindings might express, for the time
being, vlan mappings, but in the future they could be used to specify,
for instance, VTEPs or the encap type the tenant network implements. I
know this might be nonsense, but at first glance it seems a viable
alternative.
Post by Russell Bryant
The ``Physical_Switch`` schema could be the same as Logical_Switch
except for the addition of 'type' (flat or vlan) and 'tag' (the VLAN
"Physical_Switch": {
"columns": {
"name": {"type": "string"},
"type": {"type": "string"},
"tag": {
"type": {"key": {"type": "integer",
"minInteger": 0,
"maxInteger": 4095},
"router_port": {"type": {"key": {"type": "uuid",
"Logical_Router_Port",
"refType": "strong"},
"min": 0, "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}}},
It seems that this structure replaces Logical_Switch for provider
networks. While this makes sense from one side, it might not play nicely
with Neutron integration. The provider info for a neutron network can
indeed be updated [1], [2] - thus allowing one to make a regular tenant
network a provider network. As usual, Neutron is inconsistent here as
well: you can transform a regular network into a provider network, but
you cannot do the opposite.
Post by Russell Bryant
Currently, a Logical_Port has a lswitch column. It would also
need a pswitch column for when the port is attached to a
"Logical_Port": {
"columns": {
"lswitch": {"type": {"key": {"type": "uuid",
"Logical_Switch",
"refType": "strong"}}},
"pswitch": {"type": {"key": {"type": "uuid",
"Physical_Switch",
"refType": "strong"}}},
...
If I understand your proposal correctly, pswitch and lswitch are
mutually exclusive - or can a port be attached to a pswitch and an
lswitch at the same time?
Post by Russell Bryant
This would have an impact on the OVN_Southbound database, as well.
No schema changes have been identified. Entries in the Bindings
table would be the same, except that the UUID in the
logical_datapath column would refer to a Physical_Switch instead
of a Logical_Switch. The contents of the Pipeline and the
flows set up by ovn-controller would need to change.
Would we need chassis entries also for the bridges implementing the mapping
with physical networks, or do we consider them to be outside of the OVN
realm?
Post by Russell Bryant
In the case of a Physical_Switch instead of a Logical_Switch,
one major difference is that output to a port that is non-local is just
sent out to the physical network bridge instead of a tunnel.
Another difference is that packets for an unknown destination should
also be sent out to the physical network bridge instead of dropped.
If there is some consensus that supporting something like this makes
sense, I'm happy to take on the next steps, which would include a more
detailed proposal that includes Pipeline and flow details, as well as
the implementation.
Thanks,
[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/multiprovidernet.py#n74
[2]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/providernet.py#n32
Post by Russell Bryant
--
Russell Bryant
_______________________________________________
dev mailing list
http://openvswitch.org/mailman/listinfo/dev
Kyle Mestery
2015-06-15 02:29:25 UTC
Permalink
Post by Salvatore Orlando
Hi Russell,
thanks for sharing these thoughts. I was indeed thinking as well we need to
support this in OVN as the provider networks are a fairly basic neutron
feature - despite being an "extensions".
I have some comments inline. I apologise in advance for their dumbness as
I'm still getting up to speed with OVN architecture & internals.
Salvatore
Post by Russell Bryant
I've been doing some thinking about OpenStack Neutron's provider
networks and how we might be able to support that with OVN as the
backend for Neutron. Here is a start. I'd love to hear what others
think.
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
Neutron's current OVS agent that runs on network nodes and hypervisors
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
This is used to name your physical networks and the bridge used to
access that physical network from the local node.
Defining a provider network via the Neutron API via the neutron
$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat
$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101
The only pedant nit I have to add here is that the 'shared' setting has
nothing to do with provider networks, but on the other hand it is also true
that is required for supporting the "can't be bothered by tenant networks"
use case.
Post by Russell Bryant
Provider Networks with OVN
--------------------------
OVN does not currently support this use case, but it's required for
Neutron. Some use cases for provider networks are potentially better
served using OVN gateways. I think the "simple networking without
tenant networks" case is at least one strong use case worth supporting.
the difference between the provider network and the L2 gateway is in my
opinion that the former is a mapping between a logical network and a
concrete physical network (I am considering VLANs as 'physical' here for
simplicity), whereas the L2 gateway is a "service" that inserts in the
logical topology to provide the same functionality. In other words with the
provider network abstraction your traffic goes always on your chosen
physical network, with the l2 gateway abstraction your traffic stays in the
tenant network, likely an overlay, unless packets are directed to an
address not in the tenant network, in which case they cross the gateway.
In my opinion it's just too difficult to state which abstraction is better
for given use cases. I'd rather expose both abstractions (the provider
network for now, and the gateway in the future), and let operators choose
the one that suits them better.
+1, I agree with your comments Salv. I'd also argue that we should likely
do provider networks first, and then followup with L2 gateway support,
given L2 gateway is new in Kilo.
Post by Salvatore Orlando
Post by Russell Bryant
One possible implementation would be to have a Neutron agent
that runs parallel to ovn-controller that handles this. It would
perform a subset of what the current Neutron OVS agent does but would
also duplicate a lot of what OVN does. The other option is to have
ovn-controller implement it.
There are significant advantages to implementing this in OVN. First, it
simplifies the deployment. It saves the added complexity of a parallel
control plane for the neutron agents running beside ovn-controller.
This alone would convince me to implement the solution with OVN; for
instance the L3 agents should then be aware of the fact that interfaces
might be plugged in multiple bridges, VIF plugging might also differ and
therefore more work might be needed with port bindings, and finally you'd
have provider networks secured in the "neutron way", whereas tenant
networks would be secured in the "OVN way"
Nevertheless, this might be a case where possibly ML2 could be leveraged
(with or without tweaks) to ensure that the OVS driver implements provider
networks, whereas the OVN driver implements tenant networks.
Post by Russell Bryant
Second, much of the flows currently programmed by ovn-controller
would be useful for provider networks. We still want to implement port
security and ACLs (security groups). The major difference is that
unless the packet is dropped by egress port security or ACLs, it should
be sent out the physical network and forwarded by the physical network
infrastructure.
ovn-controller would need the equivalent of the current OVS agent's
bridge_mappings configuration option. ovn-controller is
currently configured by setting values in the local Open_vSwitch
ovsdb database, so a similar configuration entry could be provided
$ ovs-vsctl set open .
external-ids:bridge_mappings=physnet1:br-eth1,physnet2:br-eth2
ovn-controller would expect that the environment was pre-configured
with these bridges. It would create ports on br-int that connect to
each bridge.
These networks also need to be reflected in the OVN databases so that an
OVN logical port can be attached to a provider network. In
OVN_Northbound, we could add a new table called Physical_Switch
that a logical port could be attached to instead of Logical_Switch.
The provider network is still a logical network. I am not able to see a
reason for having to attach a logical port to a physical switch. Can you
explain?
It seems that you are trying to describe the physical network the logical
network maps to. This makes sense, but since in Neutron then we also have
the "multi-provider" extension, which is a generalization of the provider
network concepts, would it make sense to consider some sort of logical
network bindings? These bindings might express for the time being vlan
mappings, but in the future they could be used to specify, for instance,
VTEPs or the encap type the tenant network implements. I know this might be
nonsense, but at first glance it seems a viable alternative.
Post by Russell Bryant
The ``Physical_Switch`` schema could be the same as Logical_Switch
except for the addition of 'type' (flat or vlan) and 'tag' (the VLAN
"Physical_Switch": {
"columns": {
"name": {"type": "string"},
"type": {"type": "string"},
"tag": {
"type": {"key": {"type": "integer",
"minInteger": 0,
"maxInteger": 4095},
"router_port": {"type": {"key": {"type": "uuid",
"Logical_Router_Port",
"refType": "strong"},
"min": 0, "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}}},
It seems that this structure replaces the LogicalSwitch for provider
networks. While this makes sense from one side, it might not play nicely
with Neutron integration. the provider info for a neutron network can
indeed be updated [1], [2] - thus allowing to make a regular tenant network
a provider network. As usual Neutron is inconsistent in this as well: you
can transform a regular network into a provider network, but you cannot do
the opposite.
Post by Russell Bryant
Currently, a Logical_Port has a lswitch column. It would also
need a pswitch column for when the port is attached to a
"Logical_Port": {
"columns": {
"lswitch": {"type": {"key": {"type": "uuid",
"Logical_Switch",
"refType": "strong"}}},
"pswitch": {"type": {"key": {"type": "uuid",
"Physical_Switch",
"refType": "strong"}}},
...
If I understand your proposal correctly, pswitch and lswitch are
mutually exclusive - or can a port be attached to a pswitch and an
lswitch at the same time?
Post by Russell Bryant
This would have an impact on the OVN_Southbound database, as well.
No schema changes have been identified. Entries in the Bindings
table would be the same, except that the UUID in the
logical_datapath column would refer to a Physical_Switch instead
of a Logical_Switch. The contents of the Pipeline and the
flows set up by ovn-controller would need to change.
Would we need chassis entries also for the bridges implementing the mapping
with physical networks, or do we consider them to be outside of the OVN
realm?
Post by Russell Bryant
In the case of a Physical_Switch instead of a Logical_Switch,
one major difference is that output to a port that is non-local is just
sent out to the physical network bridge instead of a tunnel.
Another difference is that packets for an unknown destination should
also be sent out to the physical network bridge instead of dropped.
If there is some consensus that supporting something like this makes
sense, I'm happy to take on the next steps, which would include a more
detailed proposal that includes Pipeline and flow details, as well as
the implementation.
Thanks,
[1]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/multiprovidernet.py#n74
[2]
http://git.openstack.org/cgit/openstack/neutron/tree/neutron/extensions/providernet.py#n32
Post by Russell Bryant
--
Russell Bryant
_______________________________________________
dev mailing list
http://openvswitch.org/mailman/listinfo/dev
Russell Bryant
2015-06-22 20:46:06 UTC
Permalink
Post by Russell Bryant
Second, much of the flows currently programmed by ovn-controller
would be useful for provider networks. We still want to implement port
security and ACLs (security groups). The major difference is that
unless the packet is dropped by egress port security or ACLs, it should
be sent out the physical network and forwarded by the physical network
infrastructure.
ovn-controller would need the equivalent of the current OVS agent's
bridge_mappings configuration option. ovn-controller is
currently configured by setting values in the local Open_vSwitch
ovsdb database, so a similar configuration entry could be provided
$ ovs-vsctl set open .
external-ids:bridge_mappings=physnet1:br-eth1,physnet2:br-eth2
ovn-controller would expect that the environment was pre-configured
with these bridges. It would create ports on br-int that connect to
each bridge.
These networks also need to be reflected in the OVN databases so that an
OVN logical port can be attached to a provider network. In
OVN_Northbound, we could add a new table called Physical_Switch
that a logical port could be attached to instead of Logical_Switch.
The provider network is still a logical network. I am not able to see a
reason for having to attach a logical port to a physical switch. Can you
explain?
Yeah, maybe the addition of "Physical_Switch" doesn't make any sense.
Now that I'm coming back to it, I'm not sure it makes as much sense as
just adding attributes to "Logical_Switch".
Post by Russell Bryant
It seems that you are trying to describe the physical network the
logical network maps to. This makes sense, but since in Neutron then we
also have the "multi-provider" extension, which is a generalization of
the provider network concepts, would it make sense to consider some sort
of logical network bindings? These bindings might express for the time
being vlan mappings, but in the future they could be used to specify,
for instance, VTEPs or the encap type the tenant network implements. I
know this might be nonsense, but at first glance it seems a viable
alternative.
That's interesting. I need to look into what "multi-provider" provides
in more detail.

<snip some additional feedback pointing out that Physical_Switch
probably doesn't make sense>
Post by Russell Bryant
This would have an impact on the OVN_Southbound database, as well.
No schema changes have been identified. Entries in the Bindings
table would be the same, except that the UUID in the
logical_datapath column would refer to a Physical_Switch instead
of a Logical_Switch. The contents of the Pipeline and the
flows set up by ovn-controller would need to change.
Would we need chassis entries also for the bridges implementing the
mapping with physical networks, or do we consider them to be outside of
the OVN realm?
I was proposing that info to be in the Open_vSwitch database, as that's
where the other OVN configuration entries live that are local to the
node. I don't think the bridge mappings are useful anywhere else.
--
Russell Bryant
Ben Pfaff
2015-06-16 00:00:06 UTC
Permalink
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
Neutron's current OVS agent that runs on network nodes and hypervisors
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
This is used to name your physical networks and the bridge used to
access that physical network from the local node.
Defining a provider network via the Neutron API via the neutron
$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat
$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101
I'm trying to understand what degree of sophistication these provider
networks have. Are they just an interface to a MAC-learning switch
(possibly VLAN-tagged)? Or do provider networks go beyond that, with
the features that one would expect from an OVN logical network
(e.g. port security, ACLs, distributed routing and firewalling, ...)?
Russell Bryant
2015-06-22 18:34:07 UTC
Permalink
(Apologies for the slow follow-up to the responses on this thread. I've
been on vacation.)
Post by Ben Pfaff
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
Neutron's current OVS agent that runs on network nodes and hypervisors
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
This is used to name your physical networks and the bridge used to
access that physical network from the local node.
Defining a provider network via the Neutron API via the neutron
$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat
$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101
I'm trying to understand what degree of sophistication these provider
networks have. Are they just an interface to a MAC-learning switch
(possibly VLAN-tagged)? Or do provider networks go beyond that, with
the features that one would expect from an OVN logical network
(e.g. port security, ACLs, distributed routing and firewalling, ...)?
(Kyle and Salvatore, please sanity check me on this.)

AFAIK, it is simply an interface to a MAC-learning switch, possibly VLAN
tagged.

It is not expected that a provider network would provide port security
or ACLs (security groups). Those would still be the responsibility of
OVN in this case.

A provider network *may* (and usually does) handle routing and SNAT/DNAT
if necessary. In that case it is managed externally to Neutron. The
only knowledge Neutron has is about the address space on the provider
network, since Neutron provides IPAM. Continuing with the example
above, we can define a subnet on that provider network with:

$ neutron subnet-create provider-101 203.0.113.0/24 \
--enable-dhcp --gateway 203.0.113.1
Neutron would do address assignment and provide the DHCP server for this
network. 203.0.113.1 would be the router.
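The address layout in this example is easy to sanity-check with
Python's standard ipaddress module; this is just a verification of the
arithmetic, not anything Neutron itself runs:

```python
import ipaddress

# The subnet and gateway from the subnet-create example above.
subnet = ipaddress.ip_network("203.0.113.0/24")
gateway = ipaddress.ip_address("203.0.113.1")

# The external router must sit inside the subnet Neutron manages.
assert gateway in subnet

# Usable host addresses (network and broadcast excluded); Neutron's
# IPAM would further reserve the gateway address itself.
hosts = list(subnet.hosts())
print(len(hosts))  # → 254
```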

Neutron (and thus, OVN) would provide port-level firewalls (Neutron
security groups) using OVN ACLs. Additional firewalls (such as at the
router) may exist, but Neutron doesn't need to know about them, as
they're expected to be managed externally.
--
Russell Bryant
Ben Pfaff
2015-06-23 20:17:20 UTC
Permalink
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
Neutron's current OVS agent that runs on network nodes and hypervisors
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
This is used to name your physical networks and the bridge used to
access that physical network from the local node.
Defining a provider network via the Neutron API via the neutron
$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat
$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101
I'm trying to understand what degree of sophistication these provider
networks have. Are they just an interface to a MAC-learning switch
(possibly VLAN-tagged)? Or do provider networks go beyond that, with
the features that one would expect from an OVN logical network
(e.g. port security, ACLs, distributed routing and firewalling, ...)?
(Kyle and Salvatore, please sanity check me on this.)
AFAIK, it is simply an interface to a MAC-learning switch, possibly VLAN
tagged.
It is not expected that a provider network would provide port security
or ACLs (security groups). Those would still be the responsibility of
OVN in this case.
A provider network *may* (but usually does) handle routing and SNAT/DNAT
if necessary. In that case it is managed externally to Neutron. The
only knowledge Neutron has is about the address space on the provider
network, since Neutron provides IPAM. Continuing with the example
$ neutron subnet-create provider-101 203.0.113.0/24 \
--enable-dhcp --gateway 203.0.113.1
Neutron would do address assignment and provide the DHCP server for this
network. 203.0.113.1 would be the router.
Neutron (and thus, OVN) would provide port-level firewalls (Neutron
security groups) using OVN ACLs. Additional firewalls (such as at the
router) may exist, but Neutron doesn't need to know about it as it's
expected to be managed externally.
I had to read this several times, but maybe I understand it now. Let me
recap for verification.

A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or whatever.
Tenant networks can be connected to physical networks via OVN gateways.

A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would rely on
normal L2 switching for packets to reach their destinations, and on the
receiving side, OVN would not have a reliable way to determine the
source of a packet (it would have to infer it from the source MAC). Is
that accurate?
Russell Bryant
2015-06-23 20:54:20 UTC
Permalink
Post by Ben Pfaff
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
Neutron's current OVS agent that runs on network nodes and hypervisors
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2[...]
This is used to name your physical networks and the bridge used to
access that physical network from the local node.
Defining a provider network via the Neutron API via the neutron
$ neutron net-create physnet1 --shared \
--provider:physical_network external \
--provider:network_type flat
$ neutron net-create physnet1-101 --shared \
--provider:physical_network external \
--provider:network_type vlan \
--provider:segmentation_id 101
I'm trying to understand what degree of sophistication these provider
networks have. Are they just an interface to a MAC-learning switch
(possibly VLAN-tagged)? Or do provider networks go beyond that, with
the features that one would expect from an OVN logical network
(e.g. port security, ACLs, distributed routing and firewalling, ...)?
(Kyle and Salvatore, please sanity check me on this.)
AFAIK, it is simply an interface to a MAC-learning switch, possibly VLAN
tagged.
It is not expected that a provider network would provide port security
or ACLs (security groups). Those would still be the responsibility of
OVN in this case.
A provider network *may* (but usually does) handle routing and SNAT/DNAT
if necessary. In that case it is managed externally to Neutron. The
only knowledge Neutron has is about the address space on the provider
network, since Neutron provides IPAM. Continuing with the example
$ neutron subnet-create provider-101 203.0.113.0/24 \
--enable-dhcp --gateway 203.0.113.1
Neutron would do address assignment and provide the DHCP server for this
network. 203.0.113.1 would be the router.
Neutron (and thus, OVN) would provide port-level firewalls (Neutron
security groups) using OVN ACLs. Additional firewalls (such as at the
router) may exist, but Neutron doesn't need to know about it as it's
expected to be managed externally.
I had to read this several times, but maybe I understand it now. Let me
recap for verification.
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or whatever.
Tenant networks can be connected to physical networks via OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would rely on
normal L2 switching for packets to reach their destinations, and on the
receiving side, OVN would not have a reliable way to determine the
source of a packet (it would have to infer it from the source MAC). Is
that accurate?
Yes, all of that matches my understanding of things.

I worry that not being able to explain it well might mean I don't have
it all right, so I hope some other Neutron devs chime in to confirm, as
well.
--
Russell Bryant
Ben Pfaff
2015-06-23 21:10:19 UTC
Permalink
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
[...]
Post by Russell Bryant
Post by Ben Pfaff
I had to read this several times, but maybe I understand it now. Let me
recap for verification.
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or whatever.
Tenant networks can be connected to physical networks via OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would rely on
normal L2 switching for packets to reach their destinations, and on the
receiving side, OVN would not have a reliable way to determine the
source of a packet (it would have to infer it from the source MAC). Is
that accurate?
Yes, all of that matches my understanding of things.
I worry that not being able to explain it well might mean I don't have
it all right, so I hope some other Neutron devs chime in to confirm, as
well.
OK, let's go on then.

Some more recap, on the reason why this would need to be in OVN. If I'm
following, that's because users are likely to want to have VMs that
connect both to provider networks and to tenant networks on the same
hypervisor, and that means that they need Neutron plugins for each of
those, and there's naturally a reluctance to install the bits for two
different plugins on every hypervisor. Is that correct? If it is, then
I'll go back and reread the ideas we had elsewhere in this thread; I'm
better equipped to understand them now.

Thanks,

Ben.
Russell Bryant
2015-06-23 21:23:38 UTC
Permalink
Post by Ben Pfaff
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
[...]
Post by Russell Bryant
Post by Ben Pfaff
I had to read this several times, but maybe I understand it now. Let me
recap for verification.
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or whatever.
Tenant networks can be connected to physical networks via OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would rely on
normal L2 switching for packets to reach their destinations, and on the
receiving side, OVN would not have a reliable way to determine the
source of a packet (it would have to infer it from the source MAC). Is
that accurate?
Yes, all of that matches my understanding of things.
I worry that not being able to explain it well might mean I don't have
it all right, so I hope some other Neutron devs chime in to confirm, as
well.
OK, let's go on then.
Some more recap, on the reason why this would need to be in OVN. If I'm
following, that's because users are likely to want to have VMs that
connect both to provider networks and to tenant networks on the same
hypervisor, and that means that they need Neutron plugins for each of
those, and there's naturally a reluctance to install the bits for two
different plugins on every hypervisor. Is that correct? If it is, then
I'll go back and reread the ideas we had elsewhere in this thread; I'm
better equipped to understand them now.
That is correct, yes.
--
Russell Bryant
Salvatore Orlando
2015-06-23 21:58:25 UTC
Permalink
I'm afraid I have to start bike shedding on this thread too.
Apparently that's what I do best.

More inline,
Salvatore
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Provider Networks
=================
OpenStack Neutron currently has a feature referred to as "provider
networks". This is used as a way to define existing physical networks
that you would like to integrate into your environment.
In the simplest case, it can be used in environments where they have no
interest in tenant networks. Instead, they want all VMs hooked up
directly to a pre-defined network in their environment. This use case
is actually popular for private OpenStack deployments.
[...]
Post by Russell Bryant
Post by Ben Pfaff
I had to read this several times, but maybe I understand it now. Let me
recap for verification.
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or whatever.
Tenant networks can be connected to physical networks via OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would rely on
normal L2 switching for packets to reach their destinations, and on the
receiving side, OVN would not have a reliable way to determine the
source of a packet (it would have to infer it from the source MAC). Is
that accurate?
While this is correct, it is also restrictive - as that would imply that a
"provider network" is just a physical L2 segment in the data centre.
Therefore logical ports on a provider network would be pretty much pass
through to the physical network. While it is correct that they might be
mapped to OVS ports on a bridge doing plain L2 forwarding onto a physical
network, this does not mean that L2 forwarding is the only thing that one
can do on provider networks.

A provider network is, from the neutron perspective, exactly like any other
logical network, including tenant networks. What changes are the bindings (or
mappings; I don't know what the correct OVN terminology is). These bindings
define three aspects:
1 - the transport type (VLAN, GRE, STT, VxLAN, etc.)
2 - the physical network, if any
3 - the segmentation id on the physical network, if any

For tenant networks, bindings are implicit and depend on what the control
plane defaults to. As Ben was suggesting, this could be STT or Geneve.
For provider networks, these bindings are explicit, as the admin defines
them. For instance: I want this network to be mapped to VLAN 666 on physical
network MEH.
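Using the neutron CLI syntax from earlier in the thread, that example binding would look something like this (the network and physnet names are of course illustrative):

```shell
# Explicitly bind the logical network to VLAN 666 on physical network MEH.
$ neutron net-create meh-666 --shared \
    --provider:physical_network MEH \
    --provider:network_type vlan \
    --provider:segmentation_id 666
```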

In practical terms, with provider networks the control plane must honour the
specification made in the neutron request concerning transport bindings for
the logical networks. If it can't honour these mappings - for instance, if it
does not support the selected transport type - it must return an error.
Nevertheless, the control plane still treats provider networks like any
other network. You can have services like DHCP on them (even if that is often
not a great idea), apply security groups to their ports, uplink them to
logical routers, and so on.
Post by Russell Bryant
Post by Ben Pfaff
Post by Russell Bryant
Yes, all of that matches my understanding of things.
I worry that not being able to explain it well might mean I don't have
it all right, so I hope some other Neutron devs chime in to confirm, as
well.
OK, let's go on then.
Some more recap, on the reason why this would need to be in OVN. If I'm
following, that's because users are likely to want to have VMs that
connect both to provider networks and to tenant networks on the same
hypervisor, and that means that they need Neutron plugins for each of
those, and there's naturally a reluctance to install the bits for two
different plugins on every hypervisor. Is that correct? If it is, then
I'll go back and reread the ideas we had elsewhere in this thread; I'm
better equipped to understand them now.
I believe people would love the idea of being able to deploy multiple
plugins in the same neutron deployment and handle some kinds of networks
with one plugin and other kinds of networks with the other plugin.
Unfortunately Neutron cannot quite do that yet, unless we add some
machinery into the ML2 plugin.

One reason I see for having them in OVN is that these provider networks
are not isolated from the rest of the logical network topology. You should
still be able to apply security groups to them or uplink them to a logical
router, as per my previous comment. This is not necessarily impossible with
different plugins, but it would probably be more efficient if entirely
handled through OVN.
Post by Russell Bryant
That is correct, yes.
--
Russell Bryant
Ben Pfaff
2015-06-23 22:56:15 UTC
Permalink
Post by Salvatore Orlando
I'm afraid I have to start bike shedding on this thread too.
Apparently that's what I do best.
These are important clarifications, not bikeshedding.
Post by Salvatore Orlando
Post by Ben Pfaff
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or
whatever. Tenant networks can be connected to physical networks via
OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would
rely on normal L2 switching for packets to reach their destinations,
and on the receiving side, OVN would not have a reliable way to
determine the source of a packet (it would have to infer it from the
source MAC). Is that accurate?
While this is correct, it is also restrictive - as that would imply that a
"provider network" is just a physical L2 segment on the data centre.
Therefore logical ports on a provider networks would be pretty much pass
through to the physical network. While it is correct that they might be
mapped to OVS ports on a bridge doings plain L2 forwarding onto a physical
network, this does not mean that L2 forwarding is the only thing that one
can do on provider networks.
A provider network is, from the neutron perspective, exactly like any other
logical network, including tenant networks. What changes are bindings (or
mappings, I don't know what's the correct OVN terminology). These bindings
1 - the transport type (VLAN, GRE, STT, VxLAN, etc)
2 - the physical network, if any
3 - the segmentation id on the physical network, if any,
For tenant networks, bindings are implicit and depend on what the control
plan defaults to. As Ben was suggesting this could STT or Geneve.
For provider networks, these bindings are explicit, as the admin defines
them. For instance I want this network to be mapped to VLAN 666 on physical
network MEH.
In practical terms with provider networks the control plane must honour the
specification made in the neutron request concerning transport bindings for
the logical networks. If it can't honour these mapping - for instance if it
does not support the select transport type - it must return an error.
Nevertheless the control plane still treats provider networks like any
other network. You can have services like DHCP on them (even if often is
not a great idea), apply security groups to its ports, uplink them to
logical routers, and so on.
OK, let me take another stab at a recap, then.

For a tenant network, it is outside the scope of Neutron to dictate or
configure how packets are transported among VMs. Instead, that is
delegated to the plugin itself or to whatever the plugin configures
(e.g. in this case, to OVN).

For a provider network, the administrator (via Neutron) configures use
of a specific transport in a specific way. A Neutron plugin must
operate over that configured transport in that way, or if it cannot, then it
must refuse to operate at all.

OK, if all that is correct, does the following logical extension
also hold? A provider network implementation is expected to
transparently interoperate with preexisting software that shares a given
transport. For example, if I set up a provider network with Neutron on
a particular Ethernet network, and I have a bunch of physical machines
attached to the same Ethernet network, then I would expect my Neutron
VMs attached to the physical network to be able to communicate back and
forth with those physical machines.

If that is the case, then I guess one description of the two different
types of network is this: a Neutron plugin may *define* a tenant
network, but a Neutron plugin only *participates* in a provider network.
Is that fair?

(I apologize if all this should be obvious to Neutron veterans. I'm new
to this!)

Thanks,

Ben.
Russell Bryant
2015-06-24 14:15:37 UTC
Permalink
Post by Ben Pfaff
Post by Salvatore Orlando
I'm afraid I have to start bike shedding on this thread too.
Apparently that's what I do best.
These are important clarifications, not bikeshedding.
Post by Salvatore Orlando
Post by Ben Pfaff
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or
whatever. Tenant networks can be connected to physical networks via
OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would
rely on normal L2 switching for packets to reach their destinations,
and on the receiving side, OVN would not have a reliable way to
determine the source of a packet (it would have to infer it from the
source MAC). Is that accurate?
While this is correct, it is also restrictive - as that would imply that a
"provider network" is just a physical L2 segment on the data centre.
Therefore logical ports on a provider networks would be pretty much pass
through to the physical network. While it is correct that they might be
mapped to OVS ports on a bridge doings plain L2 forwarding onto a physical
network, this does not mean that L2 forwarding is the only thing that one
can do on provider networks.
A provider network is, from the neutron perspective, exactly like any other
logical network, including tenant networks. What changes are bindings (or
mappings, I don't know what's the correct OVN terminology). These bindings
1 - the transport type (VLAN, GRE, STT, VxLAN, etc)
2 - the physical network, if any
3 - the segmentation id on the physical network, if any,
For tenant networks, bindings are implicit and depend on what the control
plan defaults to. As Ben was suggesting this could STT or Geneve.
For provider networks, these bindings are explicit, as the admin defines
them. For instance I want this network to be mapped to VLAN 666 on physical
network MEH.
In practical terms with provider networks the control plane must honour the
specification made in the neutron request concerning transport bindings for
the logical networks. If it can't honour these mapping - for instance if it
does not support the select transport type - it must return an error.
Nevertheless the control plane still treats provider networks like any
other network. You can have services like DHCP on them (even if often is
not a great idea), apply security groups to its ports, uplink them to
logical routers, and so on.
This is great clarification. Thanks, Salvatore.

I've been looking at this very much focused on some specific use cases
(using the provider networks extension to specify 'flat' or 'vlan' transport
type on a specific physical network), as that is what I have seen come up
regularly.

Do people actually use this to specify other transport types? Do you
know of any good references on uses? For example, I'm not sure what is
expected if you only set the transport type to VxLAN or whatever. Is it
just "use VxLAN, but exactly how is left up to the backend" ? Or is it
more well defined than that?
Post by Ben Pfaff
OK, let me take another stab at a recap, then.
For a tenant network, it is outside the scope of Neutron to dictate or
configure how packets are transported among VMs. Instead, that is
delegated to the plugin itself or to whatever the plugin configures
(e.g. in this case, to OVN).
For a provider network, the administrator (via Neutron) configures use
of a specific transport in a specific way. A Neutron plugin must
operate over that configured transport in that way, or if cannot then it
must refuse to operate at all.
OK, if that all that is correct, does the following logical extension
also hold? A provider network implementation is expected to
transparently interoperate with preexisting software that shares a given
transport. For example, if I set up a provider network with Neutron on
a particular Ethernet network, and I have a bunch of physical machines
attached to the same Ethernet network, then I would expect my Neutron
VMs attached to the physical network to be able to communicate back and
forth with those physical machines.
If that is the case, then I guess one description of the two different
types of network is this: a Neutron plugin may *define* a tenant
network, but a Neutron plugin only *participates* in a provider network.
Is that fair?
(I apologize if all this should be obvious to Neutron veterans. I'm new
to this!)
Based on this thread, and some conversations I've seen between other
Neutron devs, this is definitely *not* obvious. The recap you have here
sounds right to me, though.

However, as discussed above, what's expected from the Neutron backend
for a transport type other than 'vlan' or 'flat' with a physical network
specified isn't clear to me.
--
Russell Bryant
Salvatore Orlando
2015-06-24 15:05:43 UTC
Permalink
[resending to the ovs-dev as I sent my original reply to Russell only]

Comments inline

Salvatore
Post by Russell Bryant
Post by Ben Pfaff
Post by Salvatore Orlando
I'm afraid I have to start bike shedding on this thread too.
Apparently that's what I do best.
These are important clarifications, not bikeshedding.
Post by Salvatore Orlando
Post by Ben Pfaff
A "tenant network" is what OVN calls a logical network. OVN can
construct it as an L2-over-L3 overlay with STT or Geneve or
whatever. Tenant networks can be connected to physical networks via
OVN gateways.
A "provider network" is just a physical L2 network (possibly
VLAN-tagged). In such a network, on the sending side, OVN would
rely on normal L2 switching for packets to reach their destinations,
and on the receiving side, OVN would not have a reliable way to
determine the source of a packet (it would have to infer it from the
source MAC). Is that accurate?
While this is correct, it is also restrictive - as that would imply that a
"provider network" is just a physical L2 segment on the data centre.
Therefore logical ports on a provider networks would be pretty much pass
through to the physical network. While it is correct that they might be
mapped to OVS ports on a bridge doings plain L2 forwarding onto a physical
network, this does not mean that L2 forwarding is the only thing that one
can do on provider networks.
A provider network is, from the neutron perspective, exactly like any other
logical network, including tenant networks. What changes are bindings (or
mappings, I don't know what's the correct OVN terminology). These bindings
1 - the transport type (VLAN, GRE, STT, VxLAN, etc)
2 - the physical network, if any
3 - the segmentation id on the physical network, if any,
For tenant networks, bindings are implicit and depend on what the control
plan defaults to. As Ben was suggesting this could STT or Geneve.
For provider networks, these bindings are explicit, as the admin defines
them. For instance I want this network to be mapped to VLAN 666 on physical
network MEH.
In practical terms with provider networks the control plane must honour the
specification made in the neutron request concerning transport bindings for
the logical networks. If it can't honour these mapping - for instance if it
does not support the select transport type - it must return an error.
Nevertheless the control plane still treats provider networks like any
other network. You can have services like DHCP on them (even if often is
not a great idea), apply security groups to its ports, uplink them to
logical routers, and so on.
This is great clarification. Thanks, Salvatore.
I've been looking at this very focused on some specific use cases (using
the provider networks extensions to specify 'flat' or 'vlan' transport
type on a specific physical network) as that is what I have seen come up
regularly.
Do people actually use this to specify other transport types? Do you
know of any good references on uses? For example, I'm not sure what is
expected if you only set the transport type to VxLAN or whatever. Is it
just "use VxLAN, but exactly how is left up to the backend" ? Or is it
more well defined than that?
While the API allows you to map a provider network to a specific VxLAN VNI
or GRE tunnel key, I have never seen this done in practice.
Also, I have no idea what concrete use case might require this scenario.
But it wouldn't be the first time someone came up with some sci-fi
use case that is a key requirement for their deployment!

Nevertheless, I don't think that an implementation of provider networks
must support all transport types. Flat and VLAN only is fine with me.
API requests for different transport types should be rejected. I believe
that the provider networks code already has hooks to allow the plugin to
declare which transport types are allowed.
Post by Russell Bryant
Post by Ben Pfaff
OK, let me take another stab at a recap, then.
For a tenant network, it is outside the scope of Neutron to dictate or
configure how packets are transported among VMs. Instead, that is
delegated to the plugin itself or to whatever the plugin configures
(e.g. in this case, to OVN).
For a provider network, the administrator (via Neutron) configures use
of a specific transport in a specific way. A Neutron plugin must
operate over that configured transport in that way, or if cannot then it
must refuse to operate at all.
That's how neutron is supposed to work.
Post by Russell Bryant
Post by Ben Pfaff
OK, if that all that is correct, does the following logical extension
also hold? A provider network implementation is expected to
transparently interoperate with preexisting software that shares a given
transport. For example, if I set up a provider network with Neutron on
a particular Ethernet network, and I have a bunch of physical machines
attached to the same Ethernet network, then I would expect my Neutron
VMs attached to the physical network to be able to communicate back and
forth with those physical machines.
Absolutely
Post by Russell Bryant
Post by Ben Pfaff
If that is the case, then I guess one description of the two different
types of network is this: a Neutron plugin may *define* a tenant
network, but a Neutron plugin only *participates* in a provider network.
Is that fair?
That is correct. Indeed, neutron operates under the *wrong* assumption that
it controls IP addressing on provider networks. That is not true, and it
might lead to issues such as duplicated IPs, unless appropriate slicing of
the IP CIDRs is performed.
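One common way to do that slicing, for what it's worth, is to restrict Neutron's IPAM to part of the provider network's address space via the subnet's allocation pool; a hedged sketch, reusing the subnet from earlier in the thread:

```shell
# Reserve only .100-.199 for Neutron's IPAM; addresses outside this
# range are assumed to be managed externally, avoiding duplicate IPs.
$ neutron subnet-create provider-101 203.0.113.0/24 \
    --gateway 203.0.113.1 \
    --allocation-pool start=203.0.113.100,end=203.0.113.199
```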
Post by Russell Bryant
Post by Ben Pfaff
(I apologize if all this should be obvious to Neutron veterans. I'm new
to this!)
Based on this thread, and some conversations I've seen between other
Neutron devs, this is definitely *not* obvious. The recap you have here
sounds right to me, though.
However, as discussed above, what's expected from the Neutron backend
for a transport type other than 'vlan' or 'flat' with a physical network
specified isn't clear to me.
I think the best answer to this point is that having a provider network
backed by something which is not a physical network or a VLAN is a purely
theoretical use case. The API allows you to do so, but there's no practical
use for it.
Unless someone chimes in on this thread and proves me wrong.
Post by Russell Bryant
--
Russell Bryant