The Timeless Stack is about reliable computing. We want to remove the “magic” from things by always making a complete list of what goes into operations, making sure that list is precise, and thereby making processes consistent and repeatable.
The Timeless Stack is also based on the belief that code that isn’t run, or build instructions that aren’t executable, will almost certainly be broken (and if they aren’t yet, they will be soon; entropy marches on).
Concretely, this means everything must be code. A snippet of “leftovers” in a README file which must be managed by humans before the real build can begin is Not Acceptable.
The Timeless Stack tools are meant to work anywhere, anywhen. Centralized systems are fragile; we avoid them at every opportunity.
More important than the technology, though, are the principles of how we use it:
Zero-ambiguity environment: the Timeless Stack is developed on the principle of “precise-by-default”.
Deep-time reproducibility: the Timeless Stack represents a commitment to reproducible results today, tomorrow, next week, next year, and… you get the picture.
Control over data flow: Unlike other container systems, in the Timeless Stack you can compose filesystem trees how you want: multiple inputs, in any order; and you can explicitly declare which sections of the filesystem are useful results to export (meaning, just as importantly, you can choose which files to leave behind). Granular control lets you build pipelines that are clean, explicit, and fast. More importantly, it lets us reason about our processes, and thus scale up our ability to share.
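To make the idea concrete, here is a minimal sketch (not the real Timeless Stack API; all names here are hypothetical) of composing a filesystem tree from multiple ordered inputs and then exporting only an explicitly declared subtree as the result:

```python
# Hypothetical sketch: trees are modeled as dicts of path -> content.

def compose(inputs):
    """Overlay input trees in the order given; later inputs win on
    conflicting paths."""
    tree = {}
    for mount in inputs:
        tree.update(mount)
    return tree

def export(tree, prefix):
    """Keep only the paths under `prefix`; everything else is
    deliberately left behind."""
    return {path: content for path, content in tree.items()
            if path.startswith(prefix)}

# Multiple inputs, composed in an explicit order:
base = {"/bin/sh": "shell", "/lib/libc": "libc"}
toolchain = {"/usr/bin/cc": "compiler"}
src = {"/task/src/main.c": "int main(){}"}

tree = compose([base, toolchain, src])
result = export(tree, "/task/")  # only the declared output subtree
```

Because both the composition order and the exported subtree are declared explicitly, the same inputs always yield the same result, and intermediate clutter never leaks into the output.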
Labeling instead of contamination: The Timeless Stack configuration explicitly enforces a split between “data identity” and “data naming”. We work with hashes as primary identifiers, which makes it easy to decentralize any processing built with the Timeless Stack.
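The identity/naming split can be sketched in a few lines (a hypothetical illustration, not the Timeless Stack’s actual storage format): content is stored under its hash, and names are a separate, mutable layer of labels pointing at those hashes.

```python
import hashlib

def identity(content: bytes) -> str:
    # The hash is the primary identifier: any party can recompute and
    # verify it independently, so no central naming authority is needed.
    return hashlib.sha384(content).hexdigest()

store = {}  # identity -> content   (immutable, content-addressed)
names = {}  # name -> identity      (mutable labels, kept separate)

blob = b"some build output"
cid = identity(blob)
store[cid] = blob
names["release-v1"] = cid

# Resolving a name, then verifying the content it points at:
fetched = store[names["release-v1"]]
assert identity(fetched) == names["release-v1"]
```

Since identities are self-verifying, any mirror can serve the content and any consumer can check it, while naming remains a local, human concern layered on top.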
All of these properties come together towards two big goals: