I have been a BeOS fan for a long time. Watched it come and go, followed the community through Zeta and into Haiku, and always felt like there was something unfinished there. An OS that got the fundamentals right but never got its shot at the mainstream. I only recently started contributing to VitruvianOS (V\OS) but I am all in on it. It is an operating system built on the Linux kernel that brings the BeOS desktop experience to modern hardware, and I think it has a real chance at finishing what BeOS started.

the goal#

BeOS had this quality where everything felt immediate. You could drag windows around while playing video and burning a CD and nothing hitched. That was not an accident. It came from deep architectural decisions: pervasive multithreading, a messaging-based IPC system, an app server that owned the entire display pipeline, and a filesystem that was basically a queryable database. The whole system was designed as one coherent thing.

Modern Linux desktops are fast in aggregate throughput terms, but the subjective responsiveness is not the same. You have a kernel, then a display server, then a compositor, then a toolkit, then an application, and latency accumulates at every boundary. V\OS is an attempt to get that integrated feel back without giving up the practical benefits of running on Linux.

why linux, and how this differs from haiku and nvidia#

There are basically three approaches you can take here.

Haiku rewrote everything. Their own kernel, their own drivers, their own filesystem, their own networking stack. The result is architecturally clean, and they have done impressive work. But they are perpetually chasing hardware support. Every new WiFi chipset, every GPU generation, every NVMe controller needs custom driver work. After two decades the hardware coverage is still a real limitation for daily use. That is a hard problem and it does not get easier with time because the hardware surface area keeps growing.

That said, Haiku is genuinely helpful to V\OS. The Haiku codebase is the reference implementation of the BeOS API, and V\OS builds directly on top of it. The app server, the interface kit, the application kit, the support kit, the storage kit. V\OS takes this code and makes it run on Linux instead of the Haiku kernel. Haiku did the hard work of reimplementing and extending the BeOS API in a modern, open source codebase. V\OS gets to stand on that and focus on the kernel integration and hardware story. It is a genuine collaboration in the open source sense even if the projects have different goals.

NVIDIA’s approach is worth discussing separately because it illustrates a different philosophy. X512 did significant work getting NVIDIA’s binary drivers running on Haiku, which is notable because NVIDIA’s driver stack is essentially OS-agnostic. It is a black box. The binary blob talks to the hardware directly through its own abstractions and just needs a thin shim layer to interface with whatever kernel is underneath. That is powerful in terms of hardware support, but it means you are delegating a critical part of your graphics stack to something you cannot inspect, debug, or modify. For a project trying to own the full desktop experience, that is a tradeoff worth thinking carefully about.

V\OS takes a middle path. Use the Linux kernel as-is for hardware support, scheduling, and drivers. Then implement the BeOS kernel primitives that Linux does not have as a set of thin kernel modules called Nexus. These provide IPC ports (bounded message queues), named semaphores, shared memory areas with protection control, a thread messaging model, and filesystem event notifications via the node_monitor API. Each primitive is exposed as a character device under /dev/nexus with an ioctl interface. The BeOS/Haiku application API runs on top of this, and Haiku applications can be ported with few or no source changes.
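To make the port semantics concrete, here is a minimal userspace sketch of what a Nexus-style port provides: a bounded queue of (code, payload) messages with blocking reads and writes. This is only a model of the behavior; the real implementation lives in a kernel module behind /dev/nexus ioctls, and the `Port` class and method names here are illustrative, not the actual V\OS API.

```cpp
#include <cassert>
#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <string>
#include <utility>

// Userspace model of a Nexus-style IPC port: a bounded message queue
// with blocking semantics, roughly mirroring write_port()/read_port().
class Port {
public:
    explicit Port(std::size_t capacity) : capacity_(capacity) {}

    // Blocks while the queue is full, like write_port() in the BeOS API.
    void Write(std::int32_t code, std::string payload) {
        std::unique_lock<std::mutex> lk(mu_);
        not_full_.wait(lk, [&] { return queue_.size() < capacity_; });
        queue_.emplace_back(code, std::move(payload));
        not_empty_.notify_one();
    }

    // Blocks while the queue is empty, like read_port(). Returns the
    // message code and copies the payload out through *payload.
    std::int32_t Read(std::string* payload) {
        std::unique_lock<std::mutex> lk(mu_);
        not_empty_.wait(lk, [&] { return !queue_.empty(); });
        std::int32_t code = queue_.front().first;
        *payload = std::move(queue_.front().second);
        queue_.pop_front();
        not_full_.notify_one();
        return code;
    }

private:
    std::size_t capacity_;
    std::mutex mu_;
    std::condition_variable not_full_, not_empty_;
    std::deque<std::pair<std::int32_t, std::string>> queue_;
};

// Usage: Port p(2); p.Write(1, "hi"); std::string s; p.Read(&s) yields 1.
```

Doing this in the kernel instead of userspace is what gives proper sleep/wake behavior and cleanup when a process dies mid-conversation.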

The kernel ships with PREEMPT_RT patches by default for low-latency scheduling. There is a real tension here, though. PREEMPT_RT converts spinlocks into sleeping locks (rt_mutex), which is great for latency guarantees but causes problems with kernel modules that were not written with that in mind. Some of the Nexus module code and third-party drivers hold spinlocks in contexts where sleeping is not safe under RT, and debugging those issues is subtle: you get lockups or warnings that do not reproduce on a non-RT kernel. Auditing the locking to make sure everything is RT-safe is an active area of work.

The reference filesystems are XFS and SquashFS, both with full extended attribute support. Extended attributes matter here because BeOS-style systems use them extensively for file metadata, types, and indexing.

the graphics pipeline#

Most of the recent work has been on the display server, specifically the DRM/GBM compositor for the app server. There are two parallel tracks being developed.

The first is a DRM software-blit path. Double buffering via memcpy, udev hotplug for monitor detection, screen leasing, modeset connector management. This works everywhere including VMs without GPU acceleration. It is the reliable baseline.

The second is a GBM/EGL/GLES2 compositor. Real page flips, self-owning DRM buffers, a full GBM surface with EGL context and GLES2 rendering. When this path is available, you get actual GPU compositing.

The interesting part is the per-window compositing design. Right now, V\OS renders all windows into one shared back buffer using AGG (Anti-Grain Geometry, a CPU-based software renderer). When you drag a window, everything behind it has to be redrawn. The per-window design gives each window its own CPU-accessible buffer. AGG renders into each window buffer independently. Then the GLCompositor takes all those buffers, uploads them as textures, and assembles the final frame as textured quads rendered back-to-front. A glReadPixels pulls the result back to the front buffer, and SetCrtc pushes it to the display.

This means window dragging is just repositioning a quad. No redraw of obscured content. It also opens the door to transparency and other composition effects without redesigning the rendering pipeline.
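The composite pass described above can be sketched as a plain CPU painter's algorithm. The `Window` struct and `Composite` function below are hypothetical names for illustration, not the actual V\OS classes: each window keeps its own rendered buffer, and the frame is assembled by blitting them back-to-front. In the real GL path those blits become textured quads, but the key property is the same either way: dragging a window changes only its offset, never its pixels.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// A window owns its own pixel buffer, rendered once (by AGG in V\OS).
// Moving the window changes x/y only; `pixels` is untouched.
struct Window {
    int x, y, w, h;
    std::vector<std::uint32_t> pixels;  // w * h pixels
};

// Assemble a frame by blitting window buffers back-to-front
// (painter's algorithm). Later windows in `stack` are on top.
std::vector<std::uint32_t> Composite(int fw, int fh,
                                     const std::vector<Window>& stack) {
    std::vector<std::uint32_t> frame(static_cast<std::size_t>(fw) * fh, 0);
    for (const Window& win : stack) {
        for (int r = 0; r < win.h; ++r) {
            for (int c = 0; c < win.w; ++c) {
                int fx = win.x + c, fy = win.y + r;
                if (fx >= 0 && fx < fw && fy >= 0 && fy < fh)
                    frame[fy * fw + fx] = win.pixels[r * win.w + c];
            }
        }
    }
    return frame;
}
```

Recompositing after a drag touches no window buffer: obscured content "reappears" simply because the top window's quad moved off it.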

The two-tier design is deliberate. If the GPU path is not available (no virgl in the VM, no hardware GL), the system falls back to the software DRM path automatically. You always get a working display.
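The fallback decision itself is simple enough to sketch as a capability check. The names here are illustrative, not the actual V\OS probing code: the GPU compositor is chosen only when the whole GBM/EGL/GLES2 stack probed successfully, and anything missing drops the pipeline to the software blit path.

```cpp
#include <cassert>

// Which display backend to drive. kSoftwareBlit is the universal
// baseline (memcpy double buffering over DRM dumb buffers).
enum class Backend { kGpuCompositor, kSoftwareBlit };

// Sketch of the two-tier selection: any missing capability (no virgl
// in the VM, no hardware GL) falls back to the software path, so the
// system always comes up with a working display.
Backend PickBackend(bool have_gbm, bool have_egl, bool have_gles2) {
    if (have_gbm && have_egl && have_gles2)
        return Backend::kGpuCompositor;
    return Backend::kSoftwareBlit;
}
```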

things I have learned#

Testing an operating system is fundamentally different from testing application software. The cycle is: build in a Debian chroot, package an ISO with cpack and mkiso.sh, boot the ISO in QEMU/KVM, then read serial console logs and SSH in to inspect state. There is no unit test harness for “does the display server come up and render windows.” Your test oracle is journalctl output, dmesg, and QEMU monitor screendumps. You learn to read boot logs carefully.

The two active branches for the DRM backend have incompatible buffer ownership models. One uses self-owning buffers where the DrmBuffer manages its own GBM/DRM lifecycle. The other has the HWInterface manage buffer lifetime externally. Both approaches work, but they conflict in the core files (DrmBuffer.cpp, DrmHWInterface.cpp) and cannot be merged until one design proves out under real page-flip stress testing. This is the kind of design tension that only resolves with actual runtime data.

Cross-distro build tooling is an ongoing source of friction. The build scripts assume Debian paths for GRUB, so developing on openSUSE means creating symlinks from grub2-mkstandalone to grub-mkstandalone and faking the /usr/lib/grub/i386-pc directory structure. The Haiku resource compiler tools (rc/xres) are another build dependency that has to come from somewhere. None of this is hard, but it is the kind of thing that costs you an afternoon the first time.

The decision to implement BeOS primitives as kernel modules rather than userspace shims was correct. Ports need to block in the kernel for proper sleep/wake semantics. Semaphores need kernel-level lifecycle management so they get cleaned up when a process exits unexpectedly. Shared memory areas need kernel involvement for protection and cross-process mapping. You can fake some of this with futexes and shared memory segments, but you end up reimplementing half a kernel subsystem in userspace and it is fragile. Keeping Nexus thin (just character devices and ioctls, no new syscalls) has been a good tradeoff between correctness and maintainability.
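A toy model of the ownership tracking that motivates the kernel-side design (all names hypothetical): the kernel remembers which process created each semaphore, so everything a process owns can be reclaimed when it exits, even if it crashed before calling delete_sem(). A userspace shim cannot guarantee this cleanup; a kernel module gets the exit notification for free.

```cpp
#include <cassert>
#include <map>

// Sketch of kernel-side semaphore lifecycle tracking: a registry
// mapping semaphore id -> owner pid, so process exit can reclaim
// every semaphore the process created. Illustrative only.
class SemRegistry {
public:
    // Create a semaphore owned by `owner_pid`; returns its id.
    int Create(int owner_pid) {
        int id = next_id_++;
        owner_[id] = owner_pid;
        return id;
    }

    bool Exists(int id) const { return owner_.count(id) != 0; }

    // Called on process exit (cleanly or via crash): reclaim every
    // semaphore the process owned. Returns how many were reclaimed.
    int CleanupFor(int pid) {
        int reclaimed = 0;
        for (auto it = owner_.begin(); it != owner_.end();) {
            if (it->second == pid) {
                it = owner_.erase(it);
                ++reclaimed;
            } else {
                ++it;
            }
        }
        return reclaimed;
    }

private:
    int next_id_ = 1;
    std::map<int, int> owner_;  // semaphore id -> owner pid
};
```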

what is next#

The per-window GPU compositing work is the current focus. That means a WindowBuffer helper class, DrawingEngine modifications for per-window rendering, multi-texture support in the GLCompositor, and orchestration in the Desktop class to drive the composite pass. Once that lands, window management becomes a composition problem instead of a redraw problem.

After that, filesystem indexing and live queries. This was one of BeOS’s best features. The filesystem maintained indexes on file attributes, and you could run queries against them that returned live result sets updating in real time. Implementing this on top of Linux with extended attributes and inotify/fanotify is the plan, but there is real design work to figure out the right indexing strategy.
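To illustrate the live-query idea, here is a minimal sketch, deliberately simplified to a single query over a single attribute, with all names hypothetical: a watcher gets an event whenever a file enters or leaves the result set as its attribute changes. A real implementation would sit on top of extended attributes plus inotify/fanotify and a persistent index, which is exactly where the design work is.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Toy model of a live query: watch for files whose attribute equals a
// value, and notify when a file enters (true) or leaves (false) the
// result set. Illustrative only; not the planned V\OS API.
class AttributeIndex {
public:
    using Listener = std::function<void(const std::string& file, bool entered)>;

    // Register a single live query matching attribute == `value`.
    void Watch(const std::string& value, Listener fn) {
        value_ = value;
        listener_ = std::move(fn);
    }

    // Update a file's attribute; fire an event if its membership in
    // the live result set changed.
    void SetAttr(const std::string& file, const std::string& value) {
        bool was = attrs_.count(file) != 0 && attrs_[file] == value_;
        attrs_[file] = value;
        bool now = (value == value_);
        if (listener_ && was != now)
            listener_(file, now);
    }

private:
    std::string value_;
    Listener listener_;
    std::map<std::string, std::string> attrs_;  // file -> attribute value
};
```

The point of the model: the query result is not a snapshot but a subscription, which is what made BeOS queries feel alive.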

The current development strategy is to iterate fast in VMs. QEMU/KVM with virtio-gpu gives a tight enough feedback loop to build out the feature set (the compositor, window management, UI toolkit behavior) without getting bogged down in bare-metal driver issues. Once the core experience is solid in a VM, bringing it to actual hardware is a much more tractable problem because you know exactly what the software is supposed to do, and you are chasing driver and timing issues rather than design questions.

On the UI side, V\OS running the BeOS API on Linux means the entire BView/BWindow/BControl widget hierarchy is available. I like how the toolkit is structured. It is object-oriented in a way that feels natural rather than ceremonial, and the message-passing integration means widgets communicate through the same IPC system as everything else. There are some ideas I want to explore around modern widget components and higher-level UI patterns that BeOS never got to, things that would make application development on V\OS feel contemporary without abandoning the simplicity of the original API. Some of those lifts are substantial, but they are the kind of work that compounds.

Testing images will be published when the graphics pipeline stabilizes.

The code is at github.com/VitruvianOS.