No, your diagram should look like this:

User space
    +-------------------------------------------------------------------------+
    |                 +------------------+------------------+                 |
    |                 |                  |                  |                 |
+---+---+         +---+---+         +----+----+         +---+---+         +---+---+
| App#X | <-----> | CARPd | <-----> | ctsyncd | <-----> | App#X | <-----> | App#X |
+---+---+         +---+---+         +----+----+         +---+---+         +---+---+
    |                 |                  |                  |                 |
----------------------------------------------------------------------------------------
Kernel                               network I/O etc

Or there is only one BUS (and it is actually implemented using netlink). You need to
connect each application daemon to carpd, even if it is done over broadcast netlink.
And for any in-kernel access you will need to create a new App and a new kernel part.

If we extrapolate this, we get the following: userspace CARP determines that it is a
master, so it suspends all kernel memory (or dumps /proc/kmem) and begins to advertise
it. The remote node receives it and ends up with pretty much the same firewall settings,
flow controls and any other in-kernel state. Never mind that it takes a long time.

It makes sense if App#X needs userspace access only. But here is another diagram:

                 userspace
                     |
   ------------------+------------------------------
                   CARP                  kernelspace
                     |
                     |
   +----------+------+-----+---------+-------
   |          |            |         |
ct_sync     iSCSI        e1000      CPU

My main idea for in-kernel CARP was to implement an invisible HA mechanism suitable for
in-kernel use. You do not need to create a netlink protocol parser, you do not need to
add extra userspace overhead, you do not need to add userspace-oriented control hooks to
the in-kernel infrastructure. Just register a callback. But even with such a simple
approach you still have the opportunity to collaborate with userspace, if you need to.
Why create all the userspace cruft if/when you only need the kernel part?

Summary: with your approach any data flow MUST go through userspace arbiters, with all
their overhead and complexity. With my approach any data flow _MAY_ go through userspace
arbiters, but if you need (or only have) in-kernel access, then in-kernel CARP is the
only solution.
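
Just to show what the first picture costs every application: below is a rough userspace
sketch of the netlink side each App#X would have to carry only to hear about CARP state
changes. NETLINK_CARP, CARP_GRP_STATE and struct carp_state_msg are made-up names here,
standing in for whatever protocol carpd would actually have to define; nothing like that
exists today.

/* Hypothetical sketch: NETLINK_CARP, CARP_GRP_STATE and struct carp_state_msg
 * are invented placeholders for a carpd<->application protocol. */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>

#define NETLINK_CARP    31      /* made-up netlink protocol number */
#define CARP_GRP_STATE  1       /* made-up multicast group for state changes */

struct carp_state_msg {         /* made-up message layout */
        unsigned int    vhid;   /* virtual host id */
        unsigned int    state;  /* 0 - backup, 1 - master */
};

int main(void)
{
        struct sockaddr_nl addr = {
                .nl_family = AF_NETLINK,
                .nl_groups = CARP_GRP_STATE,
        };
        char buf[4096];
        int s, len;

        s = socket(AF_NETLINK, SOCK_RAW, NETLINK_CARP);
        if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                return 1;

        for (;;) {
                struct nlmsghdr *nlh = (struct nlmsghdr *)buf;

                len = recv(s, buf, sizeof(buf), 0);
                if (len <= 0)
                        break;

                /* Walk every netlink message in the datagram. */
                for (; NLMSG_OK(nlh, len); nlh = NLMSG_NEXT(nlh, len)) {
                        struct carp_state_msg *m = NLMSG_DATA(nlh);

                        printf("vhid %u is now %s\n", m->vhid,
                               m->state ? "master" : "backup");
                }
        }
        close(s);
        return 0;
}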
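And this is roughly all that "just register a callback" means for an in-kernel user such
as ct_sync. carp_notifier_list, CARP_MASTER and CARP_BACKUP are made-up names (the real
interface may well look different); only the standard notifier-chain pattern matters.

/* Hypothetical sketch: carp_notifier_list, CARP_MASTER and CARP_BACKUP are
 * invented placeholders for whatever the in-kernel CARP module would export. */
#include <linux/module.h>
#include <linux/notifier.h>

#define CARP_MASTER     1
#define CARP_BACKUP     2

/* Assumed to be exported by the in-kernel CARP module. */
extern struct atomic_notifier_head carp_notifier_list;

static int ctsync_carp_event(struct notifier_block *nb,
                             unsigned long event, void *data)
{
        switch (event) {
        case CARP_MASTER:
                /* We became master: start pushing conntrack state. */
                break;
        case CARP_BACKUP:
                /* We became backup: stop pushing, start listening. */
                break;
        }
        return NOTIFY_OK;
}

static struct notifier_block ctsync_carp_nb = {
        .notifier_call = ctsync_carp_event,
};

static int __init ctsync_init(void)
{
        return atomic_notifier_chain_register(&carp_notifier_list,
                                              &ctsync_carp_nb);
}

static void __exit ctsync_exit(void)
{
        atomic_notifier_chain_unregister(&carp_notifier_list,
                                         &ctsync_carp_nb);
}

module_init(ctsync_init);
module_exit(ctsync_exit);
MODULE_LICENSE("GPL");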