107: In their midst
Published 10 years, 6 months ago
Description
This week, we are going to be talking with Aaron Poffenberger, who has much to share about his first-hand experience in infiltrating Linux conferences with BSD-goodness.
Headlines
Alexander Motin implements CTL High Availability
- CTL HA allows two "head" nodes to be connected to the same set of disks, safely
- An HA storage appliance usually consists of 2 totally separate servers, connected to a shared set of disks in separate JBOD sleds
- The problem with this setup is that if both machines try to use the disks at the same time, bad things will happen
- With CTL HA, the two nodes can communicate, in this case over a special TCP protocol, to coordinate and make sure they do not step on each other's toes, allowing safe operation
- The CTL HA implementation in FreeBSD can operate in the following four modes:
- Active/Unavailable -- without interlink between nodes
- Active/Standby -- with the second node handling only basic LUN discovery and reservation, synchronizing with the first node through the interlink
- Active/Active -- with both nodes processing commands and accessing the backing storage, synchronizing with the first node through the interlink
- Active/Proxy -- with second node working as proxy, transferring all commands to the first node for execution through the interlink
- The custom TCP protocol has no authentication, so it should never be enabled on public interfaces
- Doc Update ***
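To give a feel for how the mode selection above maps onto an actual setup: on FreeBSD, CTL HA is configured through kernel tunables. The fragment below is a hedged sketch of a two-node pairing; the tunable names (`kern.cam.ctl.ha_id`, `ha_mode`, `ha_peer`) and the numeric mode mapping are assumptions based on the ctl(4) manual page, so verify them against your release before use.

```shell
# /boot/loader.conf on node 1 -- assumed tunable names, check ctl(4)
kern.cam.ctl.ha_id=1                         # this node's ID; the peer uses 2
kern.cam.ctl.ha_mode=1                       # assumed mapping: 0=Active/Standby, 1=Active/Active, 2=Active/Proxy
kern.cam.ctl.ha_peer="listen 10.0.0.1:999"   # node 2 would use: "connect 10.0.0.1:999"
```

Since the interlink protocol has no authentication, the peer address should sit on a private, dedicated interface between the two heads.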
Panel Self-Refresh support lands in DragonFlyBSD
- In what has become an almost weekly stream of improvements to the Xorg stack for DragonFly, Panel Self-Refresh support has now landed, thanks to Imre Vadász
- Understanding Panel Self-Refresh and More about Panel Self-Refresh
- In a nutshell, the above articles explain how, when the image on screen is static, power savings can be obtained by letting the panel refresh itself from its own display memory (frame buffer), allowing the CPU/GPU video processing and the associated pipeline to be disabled in the meantime.
- And just for good measure, Imre also committed some further Intel driver cleanup, reducing the diff with Linux 3.17 ***
Introducing Sluice, a new ZFS snapshot management tool
- A new ZFS snapshot management tool written in Python and modeled after Apple's Time Machine
- Simple command line interface
- No configuration files, settings are stored as ZFS user properties
- Includes simple remote replication support
- Can operate on remote systems with the zfs://user@host/path@snapname URL scheme
- The future feature list includes an "import" command to move files from non-ZFS storage onto ZFS and create a snapshot
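The remote-target scheme described above decomposes cleanly with Python's standard library. The helper below is a hypothetical illustration of parsing a zfs://user@host/path@snapname URL, not Sluice's actual parser; the function name and return shape are my own.

```python
from urllib.parse import urlparse

def parse_zfs_url(url):
    """Split a zfs://user@host/path@snapname URL into its parts.

    Hypothetical helper illustrating the scheme Sluice uses for
    remote replication targets; not taken from Sluice's source.
    """
    parts = urlparse(url)
    if parts.scheme != "zfs":
        raise ValueError("expected a zfs:// URL")
    # urlparse splits user@host for us; the dataset path and snapshot
    # name are separated by the last-stated '@' inside the path.
    dataset, _, snapname = parts.path.lstrip("/").partition("@")
    return parts.username, parts.hostname, dataset, snapname

# Example: parse_zfs_url("zfs://alice@backup/tank/home@daily")
# → ("alice", "backup", "tank/home", "daily")
```

Storing settings as ZFS user properties (rather than config files) means they replicate along with the dataset, which fits this URL-addressed remote model nicely.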


