I have an M10i router with an RE-850 and two different eBGP uplinks (both with full views), and I need to do the following:
1 routing table for the 1st eBGP peer, 1 separate routing table for the 2nd eBGP peer, and a 3rd, mixed routing table.
I also need 3 forwarding tables: the 1st forwarding packets only to the first peer, the 2nd forwarding only to the 2nd peer, and a 3rd, mixed one forwarding packets along the best route.
I need this so I can offer customers separate BGP full views over 2 different sessions: the 1st carrying the full view of the 1st eBGP uplink, the 2nd carrying the full view of the 2nd eBGP uplink.
The mixed full view is needed for other customers who want that service.
This would be easy with 3 routers: terminate the 1st uplink on the 1st router, the 2nd uplink on the 2nd, and on the 3rd terminate two iBGP sessions from the 1st and 2nd routers.
Then, when a customer asks, I would terminate that customer on the appropriate router.
But I don't have 3 routers; I have only 1.
In theory I know how to do it:
I need 3 BGP tables and 3 forwarding tables (2 of the forwarding tables need only a default route inside; the 3rd forwarding table needs all best routes calculated from both BGP tables).
So please help me configure the router for that scenario, assuming you understand what I mean 😄
Also, please point me to whatever documentation I will need. I have very little JunOS experience and can't read all the documentation on the Juniper site right now; I just need to get this configuration done faster.
It sounds like you will want to employ both routing-instances and filter-based forwarding.
If you want to keep three separate copies of the core tables in your router, you will need to configure two additional routing-instances to hold those routes. I would recommend creating two separate instances for the two BGP peers, and then using instance-export/instance-import policies to copy all of those routes into the root inet.0 table for the mixed view.
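As a rough sketch of that layout (instance names, ASNs, interfaces, and addresses below are all hypothetical; adapt them to your setup), the two instances plus an instance-import policy at the master instance might look something like:

```
routing-instances {
    peer-A {
        instance-type virtual-router;
        interface ge-0/0/2.0;                 /* uplink toward provider A */
        protocols {
            bgp {
                group upstream-A {
                    type external;
                    peer-as 65001;            /* provider A's ASN (example) */
                    neighbor 192.0.2.1;       /* provider A's peer address (example) */
                }
            }
        }
    }
    peer-B {
        instance-type virtual-router;
        interface ge-0/0/3.0;                 /* uplink toward provider B */
        protocols {
            bgp {
                group upstream-B {
                    type external;
                    peer-as 65002;
                    neighbor 198.51.100.1;
                }
            }
        }
    }
}
policy-options {
    policy-statement copy-from-peers {
        term from-A { from instance peer-A; then accept; }
        term from-B { from instance peer-B; then accept; }
    }
}
routing-options {
    instance-import copy-from-peers;          /* pull both tables into inet.0 */
}
```

Double-check the instance-import/instance-export syntax against your JUNOS release before committing; the exact knobs vary between versions.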
JUNOS treats a routing-instance much like most routers treat a VRF: any interface placed in that instance will look up routes in that instance's table rather than the global table. So if you have a downstream interface ge-0/0/1 and you want it to use BGP table "B", make sure provider B is homed in routing-instance "B" and simply add interface ge-0/0/1 to that instance. Do the same for any interfaces you want to look up strictly in table "A"; any interfaces left in the root table will by default do lookups in the shared inet.0 table.
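For example, homing a customer who should only ever see provider B's table is just a matter of adding their interface to that instance (instance and interface names as in the hypothetical sketch above):

```
routing-instances {
    peer-B {
        interface ge-0/0/1.0;    /* customer interface: all lookups happen in peer-B's table */
    }
}
```

A customer BGP session configured under that instance's protocols bgp stanza would likewise advertise only provider B's view.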
JUNOS also lets you select a lookup table per packet using a feature called "filter-based forwarding". This is JUNOS's form of policy-based routing, invoked by creating an ACL (called a "firewall filter" in JUNOS) with an action of "then routing-instance X". You can find examples in the JUNOS documentation. Here's one snippet, but a quick Google search for "junos filter-based forwarding" will give you plenty of references: http://www.juniper.net/techpubs/software/junos/junos72/swconfig72-policy/html/firewall-config33.html
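A minimal filter-based-forwarding sketch in that style (the prefix and names are made up for illustration) classifies traffic arriving on a shared interface and forces part of it through table A:

```
firewall {
    family inet {
        filter choose-upstream {
            term prefer-A {
                from {
                    source-address {
                        203.0.113.0/25;          /* customers who bought "A-only" transit */
                    }
                }
                then routing-instance peer-A;    /* look up in instance A's table */
            }
            term everything-else {
                then accept;                     /* normal lookup in inet.0 */
            }
        }
    }
}
interfaces {
    ge-0/0/1 {
        unit 0 {
            family inet {
                filter {
                    input choose-upstream;       /* apply on the ingress interface */
                }
            }
        }
    }
}
```

Note that with forwarding-type instances you typically also need a rib-group to copy interface routes into the instance so next hops resolve; the linked documentation covers that part.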
Now, with all that said, I would caution you before applying this configuration in production. In the M series routers, there are scaling parameters for both the control plane as well as the data plane. While the RE-850 ships with 1.5GB of RAM, which is plenty for holding multiple millions of BGP prefixes, the underlying hardware does have a fixed amount of table lookup memory.
If you are running an older, ABC-chipset-based cFEB forwarding engine, it is limited to about half a million IPv4 prefixes in the ASIC lookup tables. As current core tables are in excess of 310k prefixes (April 2010), three copies of full tables would require more than a million concurrent prefixes in hardware.
The M7i and M10i now allow an "enhanced" cFEB, which is built on the newer "I-Chip" packet forwarding engine. This allows just over a million IPv4 prefixes in hardware, which should just barely hold your three copies of the table.
However, I think there may be a more elegant way of handling this. Instead of using multiple routing-instances with full tables, why not just create two separate tables containing only default routes pointing at your upstream providers, and terminate normal BGP peering in the central inet.0 table? This way, your hardware only needs to hold one full table, and you can still offer differential upstream forwarding. 🙂
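That alternative might be sketched like this (next-hop addresses and names hypothetical, matching the earlier examples): two lightweight forwarding instances holding nothing but a static default each, selected via the same filter-based-forwarding trick, with a rib-group so the next hops resolve:

```
routing-instances {
    via-A {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 192.0.2.1;      /* provider A */
            }
        }
    }
    via-B {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 198.51.100.1;   /* provider B */
            }
        }
    }
}
routing-options {
    interface-routes {
        rib-group inet fbf-group;       /* share interface routes with the instances */
    }
    rib-groups {
        fbf-group {
            import-rib [ inet.0 via-A.inet.0 via-B.inet.0 ];
        }
    }
}
```

Customers then peer normally against inet.0 for their routes, while a firewall filter steers each customer's packets into via-A or via-B as purchased.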