No, your diagram should look like this:

User space
+-------------------------------------------------------------------------------------+
| +-------+         +-------+         +---------+         +-------+         +-------+ |
| | App#A | <-----> | CARPd | <-----> | ctsyncd | <-----> | App#B | <-----> | App#C | |
| +---+---+         +---+---+         +----+----+         +---+---+         +---+---+ |
|     |                 |                  |                  |                 |     |
+-----+-----------------+------------------+------------------+-----------------+-----+
      |                 |                  |                  |                 |
------+-----------------+------------------+------------------+-----------------+------
Kernel                                network I/O etc

Or we can have just one bus (and it is actually implemented using netlink).

jamal> I relabeled the Apps. I suppose you see some apps using ctsyncd for something?

You need to connect each application daemon to carpd, even if only via broadcast
netlink. And for any in-kernel access you will need to create a new app and a new
kernel part.

jamal> App2app doesn't have to go across the kernel unless it turns out to be the
jamal> best way.
jamal> Alternatives include: unix or local host sockets, IPCs such as pipes, or
jamal> just shared libraries.

If we extrapolate this, we arrive at the following: userspace CARP determines that
it is the master, suspends all kernel memory (or dumps /proc/kmem), and begins to
advertise it. The remote node receives it and ends up with essentially the same
firewall settings, flow control, and all other in-kernel state.

jamal> I haven't studied what Harald proposes in detail. I think that the slave would
jamal> continuously be getting master updates.
jamal> The interesting thing about CARP is the ARP balancing feature, in which X nodes
jamal> may be masters of different IP flows, all within the same subnet.
jamal> VRRP load balances by subnet. I am not sure what challenge this will present
jamal> to ctsyncd.

Never mind that it takes a long time. It makes sense if App#X needs userspace
access only.
But here is the other diagram:

                 userspace
                     |
---------------------+--------------------------------
                   CARP                    kernelspace
                     |
      +----------+---+---+----------+
      |          |       |          |
   ct_sync     iSCSI   e1000       CPU

My main idea for in-kernel CARP was to implement an invisible HA mechanism suited
to in-kernel use. You do not need to create a netlink protocol parser, you do not
need to add extra userspace overhead, and you do not need to build
userspace-facing control hooks into the kernel infrastructure. Just register a
callback. But even with such a simple approach you still have the opportunity to
collaborate with userspace, if you need to. Why create all the userspace cruft
if/when you only need the kernel part?

jamal> so we now move appA, B, C to the kernel too?
jamal> There is absolutely no need to put this in kernel space.
jamal> If you do this, your next step should be to put zebra in the kernel.

To summarize: with your approach, any data flow MUST go through userspace
arbiters, with all the attendant overhead and complexity. With my approach, any
data flow _MAY_ go through userspace arbiters, but if you need, or only have,
in-kernel access, then in-kernel CARP is the only solution.

jamal> Yes, there is a cost. How much? Read the paper on userspace drivers; it
jamal> actually does some cost analysis.
jamal> If you prove that it is too expensive to put it in user space, then prove
jamal> it and let's have a re-discussion.