Multicast and Firewalls - Customer Lessons Learned

Background on multicast for the beginners among us, some details on the campus multicast setup which aren't collected anywhere central, and a sample configuration for a Cisco PIX/ASA security appliance (firewall). NetScreen users will find the information useful, but will have to write their own configurations.

Documentation contributed by Jim Leinweber.

Background

I've been fighting with Cisco PIX configurations to support multicast recently, notably to allow access to the DATN channels (datn.wisc.edu). Eventually we mostly succeeded.

In the process, I had to educate myself a bit on multicast. Figuring that there may be other multicast neophytes on this list, I'd like to pass on some of the stuff I found out.

Note that if you are planning to support IPv6, you will have to deal with multicast, because IPv6 eliminated broadcast.

The first issue in getting multicast through a firewall is whether you are transparent (like the campus FWSM blades) or routed. Most of you with firewalls are currently transparent, I think. However, if you have any of the "high risk" data regulated by the WI identity theft statute, HIPAA, FERPA, GLB or whatever, you should take a look at the PCI-DSS (payment card industry data security standard) requirement 1.3. They want you to have separate subnets for perimeter hosts (DMZ), wireless, main LAN, and DB servers, and fairly tight rules between them. If you are subject to PCI-DSS requirements, your firewall is going to end up in routed mode.

Before we pursue configuration issues further, we need to do some background.

Multicast for Neophytes

Multicast data, say for audio or video streams, has unicast source IPs and destination IPs in the range 224.0.0.0/4. The data payload is currently always UDP. Local multicast routing decisions are based on IGMP (IP protocol 2) and PIM (IP protocol 103). Clients send IGMP join messages to an upstream device with TTL 1, which processes them, adds the multicast destination IP to its list of subscribed groups, and in turn emits a new IGMP message toward its own upstream device. On a LAN, ethernet packets will use special multicast destination addresses which repeat a chunk of the multicast group IP.
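
Specifically, the mapping copies the low-order 23 bits of the group IP onto the fixed ethernet multicast prefix 01:00:5e (so 32 different groups share each ethernet address). As a worked example, using an address from the DATN range mentioned later:

  group IP 239.1.1.2   = 11101111 00000001 00000001 00000010
  low-order 23 bits    =           0000001 00000001 00000010
  ethernet destination = 01:00:5e:01:01:02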

Within an autonomous system, PIM (protocol independent multicast, sparse mode) has a centralized RP (rendezvous point) which keeps track of where multicast senders are for joined group clients. Between autonomous systems, multicast routes are advertised using other protocols, typically the multiprotocol extensions to BGP. Eventually the chain of join messages reaches the multicast source, which starts sending traffic back down the chain toward the client.

IGMP messages stop as soon as you run into a PIM-enabled router. From there the PIM router sends join messages toward the RP or source and a distribution tree gets built. IGMP only goes beyond one hop when IGMP forwarding/proxying is enabled (which is described later). Some consumer-level Linksys routers also employ an IGMP proxy to allow multicast traffic to penetrate a local NAT, when configured to do so.

Each intermediate router, firewall, or other device has to keep track of which interfaces have multicast clients on them and send subscribed group traffic to only those interfaces. When a client is done, it sends an IGMP leave message upstream, and the traffic stops. If a client simply vanishes, the upstream device is periodically doing IGMP queries to see what groups are active on a particular interface/LAN. Each active client has to respond with an IGMP group membership report packet. Any groups which don't get responses get pruned by the querying device, and the traffic eventually stops.
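
On a 7.x PIX you can watch this join/query/prune state happen; a few read-only commands to sanity-check with:

  show igmp groups
  show igmp interface inside
  show mroute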

The RFCs for multicast and the group assignments by IANA are a bit of a mess; they have mutated a lot over the years, and continue to do so. For example, IGMP version 3 (RFC 3376) is still only a proposed standard, and implementations of it vary. Multicast address assignments are also a bit of a mess. Some particularly important group ranges are:

  • 224.0.0.0/8 is globally routable and directly assigned by IANA. 224.0.0.0/24 is reserved for local LAN stuff. Your clients will definitely be using a bunch of these.
  • 233.0.0.0/8 is the "GLOP" range, where autonomous systems with 16-bit AS numbers can emit 233.X.Y.0/24 and be globally accessible (clients send joins toward the AS; inside the AS the actual sources register with their PIM RP). Most remote sources you run across will be in this range, because all other globally routable multicast groups have to go through the formality of a specific assignment by IANA. Each 16-bit AS gets its own /24 (see the worked example after this list).
  • 239.0.0.0/8 is for administratively scoped multicast within an autonomous system (routing domain). Most local sources you run across will be in this range, e.g. our DATN channels.
This ignores a lot of details, but highlights the major stuff you are going to run into.
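
The GLOP mapping is just the 16-bit AS number split across the two middle octets. As a worked example, reverse-engineering the Northwestern range from the next section (the AS number here is inferred from the group range, not looked up):

  AS 103 = 0x0067 -> octets 0 and 103 -> 233.0.103.0/24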

Multicast at the UW

Internet2, the campus backbone, and your XXI network building edge switches are all multicast enabled, or can be. The local DATN channels send from 128.104.30.0/24 and multicast to 239.1.1.0/24. The campus routing infrastructure uses PIM-SM with RP 144.92.20.137. On your transit or LAN subnet, the upstream virtual router port uses PIM "designated router" election priority 1 (bigger wins; ties go to the largest IP address, which will tend to be your firewall).
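
For what it's worth, if you were terminating PIM-SM yourself (which, as discussed below, I don't recommend on a PIX), pointing a device at the campus RP would be something like:

  pim rp-address 144.92.20.137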

Northwestern University CSPAN channels multicast to 233.0.103.0/24. U. of Canberra international channels multicast to 233.70.142.0/24.

It can also be interesting to listen to the SAP group (224.2.127.254 global; I believe there is one in 239/8 for administratively scoped announcements) to see what multicast content is being offered. Most of it is either broken or pointless, but there are some interesting things out there. I found a UK radio station, which I had previously been receiving as a unicast stream, that is now being multicast. DATN channels don't appear to be advertised via SAP. VLC can add SAP announcements to its playlist; see Using VLC on University Campus to Receive DTV stations.

Surprisingly, the DATN channel clients emit multicast responses back to the general network, so expect to do multicast both in and out, even if you don't think you have any local services.

The Cisco default interval for IGMP queries is 125 seconds; expect to see queries at some interval between 30 and 125 seconds.

Multicast and Cisco PIX

For those of you not subject to PCI-DSS, with transparent mode firewalls: you can ignore PIM, use IGMP "stub forwarding" configurations, and punt the whole routing and group join/prune issue upstream. You just need to make sure your ingress and egress filters allow IGMP traffic, and UDP traffic to most multicast destinations. If you have access to one of the campus FWSM departmental firewalls, DoIT wrote a nice set of rules, and you can just crib from those.
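
A minimal sketch of the multicast-relevant entries, with made-up ACL names (crib the real details from the DoIT rules):

  access-list INSIDE-IN extended permit igmp any any
  access-list INSIDE-IN extended permit udp any 224.0.0.0 240.0.0.0
  access-list OUTSIDE-IN extended permit igmp any any
  access-list OUTSIDE-IN extended permit udp any 224.0.0.0 240.0.0.0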

Those of you, like me, running a firewall in routed mode are going to have a bit more fun. The first thing you need to know is that Cisco PIX support for multicast is buggy. Older versions may reload just from passing multicast traffic. Even on 7.2(X), PIM is still too broken to use. And I managed to crash a PIX just asking "show pim tunnel interface ..." last Saturday.

A particular difficulty on current PIX versions is with IGMP upstream. With PIM configured, it doesn't send upstream IGMP joins, period. With IGMP forwarding instead, it still doesn't send the first IGMP join up, so the client sees no data until the IGMP query-timeout interval expires and the IGMP query/membership-reply cycle operates at least once. With a default timeout of 125 seconds, your client waits some more or less random amount between 3 seconds and 3 minutes for data to start flowing. The campus backbone is fine - you can be receiving data from Australia in under 3 seconds if you either aren't going through the PIX, or the PIX is already subscribed to the multicast group by any of: another client, a previous membership which hasn't expired, or a static join.

A consequence of this is that the DATN multicast test applet will usually fail, because it times out after about 5 seconds. More patient clients will eventually succeed.
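
If only a handful of groups matter to you, a static join on the inside (client-facing) interface is one way to keep the PIX permanently subscribed, so the first-join delay never happens. A sketch, using a hypothetical channel from the DATN range:

  interface Ethernet1
    igmp static-group 239.1.1.2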

The next bug: if you assume you can turn PIM off just because you aren't using it, think again. Any interface with PIM turned off won't ever emit IGMP joins upstream. Finally, if your PIX wins the DR election on the transit network upstream, which by default it will, it probably won't forward IGMP either.

So the multicast part of a working IGMP configuration on a routed PIX could look something like this:

  interface Ethernet0
    nameif outside
  interface Ethernet1
    nameif inside
  interface Ethernet2
    nameif dmz
  !
  multicast-routing
  !
  interface Ethernet0
    ! lose the upstream DR election so the real router keeps PIM duties
    pim dr-priority 0
  interface Ethernet1
    ! proxy client IGMP joins out the outside interface
    igmp forward interface outside
    ! default is 125 seconds; shorter means less client waiting
    igmp query-interval 30
  interface Ethernet2
    no pim
    no igmp
Notice that we lowered the IGMP query interval. That increases (slightly) the amount of IGMP traffic on your LAN, but it cuts the client wait for data to stream back by a lot. Your users will thank you.

You don't need any "mroute"; your unicast default route will suffice. (If you do use mroute, the argument is the unicast network of a set of sources, not the group they multicast back to.)
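
If you did want one for, say, the DATN sources, it would reference their unicast source subnet and the interface they arrive on, something like:

  mroute 128.104.30.0 255.255.255.0 outside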

Here is an example of the kinds of access-list rules you might want, using the nifty 7.x object-group syntax (all you 6.X people should upgrade ASAP ...):

object-group network MCAST-IANA
 description multicast ranges in use by IANA: 224, 232 ssm, 233 glop, 239 admin
 network-object 239.0.0.0 255.0.0.0
 network-object 224.0.0.0 255.0.0.0
 network-object 232.0.0.0 254.0.0.0
object-group network LOCAL-MCAST
 description LAN multicast stuff
 network-object 224.0.0.0 255.255.255.0
object-group network BAD-MCAST-OUT
 description multicast services which should not leave
 ! mDNS / Bonjour
 network-object host 224.0.0.251
 ! SLP service location
 network-object host 224.0.1.22
 ! SLP directory agents
 network-object host 224.0.1.35
 ! Norton Ghost
 network-object host 229.55.150.208
 ! SSDP / UPnP
 network-object host 239.255.255.250
 ! SLP, administratively scoped
 network-object host 239.255.255.253
object-group network BAD-MCAST-IN
 description multicast services which should not enter
 ! mDNS / Bonjour
 network-object host 224.0.0.251
 ! SLP, administratively scoped
 network-object host 239.255.255.253
object-group network ALL-MCAST
 description full multicast block
 network-object 224.0.0.0 240.0.0.0


access-list OUT101 extended permit igmp any any
access-list OUT101 extended deny tcp any object-group ALL-MCAST
access-list OUT101 extended deny icmp any object-group ALL-MCAST
access-list OUT101 extended permit pim any any
access-list OUT101 extended deny ip any object-group BAD-MCAST-OUT
access-list OUT101 extended permit udp any object-group MCAST-IANA
access-list OUT101 extended deny ip any object-group ALL-MCAST

access-list IN102 extended permit igmp any any
access-list IN102 extended deny ip any object-group BAD-MCAST-IN
access-list IN102 extended permit udp any object-group MCAST-IANA
access-list IN102 extended permit pim any any

access-group OUT101 out interface outside
access-group IN102 in interface outside
Obviously you would normally want additional unicast ACL rules.

And if it weren't already obvious, multicast clients have to be bare metal on wired campus building jacks. VMware guests don't seem to do multicast, it's off on the campus wireless network, the campus VPN service doesn't forward it, Charter Broadband doesn't support it, etc.

Thanks to all the folks at DoIT who helped me figure some of this stuff out. Dale Carder provided some key information.

-- Jim Leinweber
State Laboratory of Hygiene, University of Wisconsin - Madison

Ed note: Additional Contributions from Marc Bourgeois at UW Housing




Keywords: multicast, firewall, fwsm, pix, netscreen, datn
Doc ID: 5604
Owner: Dale C.
Group: Network Services
Created: 2007-03-20 19:00 CDT
Updated: 2012-02-03 17:04 CDT
Sites: Network Services, Systems & Network Control Center