I think we shouldn't[1] be making Operating Systems, per se, but something like Operating Environments.
An Operating Environment (OE) would be a new interface -- perhaps a shell and APIs to access file systems, devices, libraries, and such -- possibly one that can just be launched as an application in your host OS. That way you can reuse all the facilities provided by the host OS and present them in new, maybe more convenient ways. I guess Emacs is a sort of Operating Environment, as are browsers. 'Fantasy computers' are also Operating Environments: pico-8, Mini Micro[2], uxn, etc.
Of course, if you really have a great low-level reason to reinvent the way things are done (maybe to improve security, efficiency, DX, or all of the above), then go ahead :)
The main reasons are the difficulty of developing robust low-level systems like file systems, the large number of processors you may want to support, and the need to create or port a huge number of device drivers. Linux, for example, supports a huge number of devices at this point (though you could use some sort of compatibility layer). Also, developing a new UX is very different from developing a new low-level architecture (and you can just plug the UX into existing OSes).
In most cases an OS shell (or an OE), from the user's point of view, is "just" a good way of finding and launching applications -- maybe also a way of finding and managing files, if you count the file manager in. It shouldn't get too much in the way or be the center of attention, I guess. (This contrasts with its low-level design, which has a large number of functions, APIs, etc.) But it should probably also be (in different contexts) cozy, comfortable, beautiful, etc. (because why not?). A nice advanced feature is the ability to automate things and run commands programmatically, which command shells tend to have by default but graphical shells tend to lack. And I'm sure there is still a lot to explore in OS UX...
[1] I mean, unless you really have a reason, with all caveats in mind, of course.
[2] https://miniscript.org/MiniMicro/index.html#about
I think there's value in exploring operating systems and environments. And it's very useful to note that you don't need to do both at the same time. This strikes me as an unnecessary worry though:
> The main reasons are the difficulty of developing robust low-level systems like file systems, the large number of processors you may want to support, and the need to create or port a huge number of device drivers. Linux, for example, supports a huge number of devices at this point (though you could use some sort of compatibility layer).
As a start, you simply don't need to support all of this, and you don't even need to aspire to support it all. Support virtio versions of the relevant hardware class and try to run on reasonable hypervisors. Support one or two physical devices of the relevant hardware class to run on a real machine.
If you can plug in to an existing driver abstraction and pull in external drivers, great. NDIS is one option there.
If your OS is interesting, you can figure out the driver situation and the CPU architecture situation. It would sure be neat to run on a Raspberry Pi, but only running on x86 isn't a big limitation.
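The "support virtio first" advice is workable because virtio devices identify themselves uniformly: on PCI, they all use vendor ID 0x1af4, regardless of which hypervisor exposes them. A rough sketch of the kind of classification a hobby kernel's PCI scan might do (the function name and return values are invented for illustration):

```python
# Sketch: classify PCI devices, treating virtio (vendor 0x1af4) specially.
# A new OS can cover "all" network/block/console hardware on hypervisors
# with a handful of virtio drivers, and defer native drivers.

VIRTIO_VENDOR = 0x1AF4

def classify(vendor_id: int, device_id: int) -> str:
    if vendor_id == VIRTIO_VENDOR:
        # Transitional virtio device IDs start at 0x1000,
        # modern ones at 0x1040 + device type.
        return "virtio"
    return "needs-native-driver"

assert classify(0x1AF4, 0x1041) == "virtio"               # virtio-net (modern)
assert classify(0x8086, 0x100E) == "needs-native-driver"  # QEMU's e1000
```

With a table like this, the "one or two physical devices" the comment suggests become the only entries in the native-driver branch.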
> I think there's value in exploring operating systems and environments. And, it's very useful to note that you don't need to do both at the same time.
Although they can be done separately, I think there is also value in doing both together, since the design can be made to work better as a whole. Furthermore, they can be designed together with the hardware as well, so that all three work together better.
Sorry, is "operating environment" a known and used term?
Completely agree with the sentiment here, but have not seen this term used to describe what should almost certainly be distinguished as its own concept to facilitate these kinds of conversations. OE is great terminology.
Some of these are operating systems in the most literal sense. Some are environments that aren't concerned with starting processes or reading disk blocks.
I think there are benefits to operating systems and to operating environments. There is more than finding and launching applications; there are ways of applications interacting with the system and with each other, as well as with the operator. Both low-level and high-level design are relevant, and should be made to work better together. There is also the computer hardware design; some ideas and functions of an operating system would work better when the hardware design and the operating system are made to work with each other.
My ideas involve avoiding many of the features of modern systems.
Mine does not use: ambient authority, Unicode, file names, environment variables, command-line arguments (although there is an "initial message" which serves a similar purpose), USB, spyware, or worthless excessive animations (some animations are useful, such as objects moving around so that you can more easily see where they went, but it should still be possible to adjust their speed or disable them entirely), etc.
However, just as important as the things to be avoided are the things it would include, such as:
- Separate Command and Control keys (like the Macintosh has). There are other differences in the keyboard as well, and also differences in the working of the keyboard manager program. An application can request a keyboard mode, which can be TRON code, a specialized 8-bit character set (e.g. APL), command mode, game mode, or hybrid mode.
- Deterministic execution. A program's execution, with the exception of messages sent/received by I/O, is deterministic (and the I/O can always be overridden, so if the input is recorded and replayed, the output will be the same, even if you run it on a different computer).
- Capability-based security with proxy capabilities. All I/O must use capabilities (a program that no longer has any usable capabilities is automatically terminated (unless a debugger is attached); this is one of the two "normal" ways to terminate a program, the other being to do an uninterruptible wait for any one of an empty set of objects). All messages between capabilities consist of bytes and/or capabilities, and when a program starts it receives an initial message, which should contain some capabilities (since if it doesn't, the program is immediately terminated unless a debugger is attached). A program can also create its own "proxy capabilities" and give those instead of the original one; the receiver cannot tell the difference.
- Deterministic execution and proxy capabilities are also useful for such things as software testing, resisting fingerprinting, etc.
- File system with forks. The forks are identified by 32-bit numbers; numbers that fit in 16 bits have a common meaning, while higher numbers have meanings specific to the file. The file management functions of the system allow accessing and dealing with all of the forks. One of these forks is used for specifying the expected type of the initial message (this is like having typed command-line arguments instead of untyped ones, but it is also used to provide capabilities for I/O, and other things).
- Transactions and locks on multiple objects at once (including objects in the file system, but also remote objects).
- Hypertext file system. Files can contain links to other files. There are no directory structures; you can use hyperlinks instead.
- Versioned file system. A link to a file can also optionally be for a specific version of the file. This means that one of the forks can be used to make a chain of the versions of the file.
- Extended TRON character code.
- A common data format used for most files as well as for a lot of the data being communicated between programs. This is a type/length/value format, a bit like DER but with many differences. This can allow any file to contain any kind of data, and has a common representation for many types which can then be extended by the use of extension types. It also allows formatted text (rather than only plain text) to be used in many more parts of the system, and allows data to be transferred more easily between programs on the system. In addition, this allows the system to work more consistently.
- Tagged data and extensions. The common format would also allow such things as tagged data, e.g. marking that a number is a value in some unit of measurement (so that you can then do things with it, e.g. automatically convert it to other units of measurement). Extensions are also possible.
- "Reveal Codes". If the file can contain formatted text, then revealing and dealing with the formatting codes directly is necessary during editing, even if the formatted text can also be displayed without revealing the codes. WYSIWYG without reveal codes is no good; and, actually, it would not quite be WYSIWYG anyways (since the formatting does not necessarily appear the same everywhere and is not intended to do so) (although there can be print preview as well, in case you actually do want to preview exactly what it looks like, to be more WYSIWYG).
- Command, Automation, and Query Language. This is a programming language which is also used for the command shell, and the common type/length/value format can be queried and manipulated with this. It can also be used to create proxy capabilities, and for communication between programs. Many kinds of mathematical and scientific functions are also available (for example, it supports big integers and big rationals, as well as matrices, etc). Functions from other programs can also be copied and automated, e.g. a data table displayed by another program can be queried, the function of a command button can be placed inside of a loop with other commands (delays, conditions, etc), etc.
- Better i18n, m17n, l10n, a11y, etc. Paper size settings do not belong in the locale setting (they belong in the printer driver configuration instead). Date/time formats do belong with the locale setting but should not be identified by language; instead an application program can call the i18n functions to format the date/time without needing an identifier. Telephone number formats do not belong with the locale setting either; this information is a combination of the data being handled and the modem configuration. There are many other improvements to be made as well; simply translating text, date/time, etc, is not good enough. Furthermore, accessibility features can be useful for everyone, and are not only for blind people etc.
- Space-age time keeping.
- Window indicators, which are associated with the capabilities of that window and can be used to monitor and control them. Some might also be usable for arranging windows, making tabbed windows and separating them, etc. This means that the application programs do not need to do many of these things, which makes the system work more consistently, as well as not requiring application programmers to implement all of these functions (and those that they do add can often be easier to do).
- Improvements of the C programming language.
- All functions can be used by the command shell and most also can be used by GUI, so both are available. Command shell and GUI also can be used together in a way that works much better than most other systems do, because you can easily move and copy data between them (see above about Command, Automation, and Query Language).
- If a new version of the CPU is available with new instructions, the new version of the operating system can emulate those instructions.
- Don't try to hide things from the operator; allow the operator to decide what kind of security they want and to program how the operation of the computer should work. Everything must be documented, and the core system must be FOSS as well, so that it can be examined, understood, altered, etc. The system can also be customized to your use (including colours, fonts, font sizes, window management, keyboard layouts, substitutions of various things (including proxy capabilities), etc.).
- And even more things than just this, too.
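The proxy-capability idea above can be sketched briefly. This is a hypothetical illustration (the class and function names are invented, and a real system would do this in the kernel's IPC layer, not in Python), showing why a receiver cannot tell a proxy from the original: both are the same kind of object, and the proxy simply forwards messages through a filter.

```python
# Sketch of proxy capabilities: a holder wraps a capability so that the
# receiver cannot distinguish the proxy from the original.

class Capability:
    def __init__(self, handler):
        self._handler = handler          # function: message -> reply
    def send(self, message):
        return self._handler(message)

def make_proxy(cap: Capability, filter_fn) -> Capability:
    """Return a new capability that forwards messages through filter_fn,
    which may rewrite, log, or reject messages before forwarding."""
    return Capability(lambda msg: cap.send(filter_fn(msg)))

# A stand-in for some real capability (it just echoes for the demo):
real = Capability(lambda msg: ("ok", msg))

# A proxy that redacts a field before forwarding -- useful for the
# fingerprinting resistance and testing mentioned above:
redacting = make_proxy(real, lambda msg: {**msg, "path": "<hidden>"})

# The receiver of `redacting` sees an ordinary Capability.
assert type(redacting) is type(real)
```

Because all I/O goes through capabilities, substituting a proxy like this is invisible to the program being confined.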
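The type/length/value format mentioned above is concrete enough to sketch as well. Below is a hedged illustration of a DER-like TLV encoding with nesting; the function names and the exact length encoding are invented for the example and are not the format the comment describes:

```python
# Hypothetical sketch of a type/length/value encoding, loosely DER-like.
# It only illustrates the general idea of nested, typed records.

def encode_tlv(type_id: int, value: bytes) -> bytes:
    """Encode one record: a 1-byte type, a length, then the raw value."""
    if len(value) < 0x80:
        length = bytes([len(value)])
    else:
        # Long form: 0x80 | number-of-length-bytes, then the length itself.
        n = len(value).to_bytes((len(value).bit_length() + 7) // 8, "big")
        length = bytes([0x80 | len(n)]) + n
    return bytes([type_id]) + length + value

def decode_tlv(data: bytes, offset: int = 0):
    """Decode one record, returning (type_id, value, next_offset)."""
    type_id = data[offset]
    first = data[offset + 1]
    if first < 0x80:
        length, start = first, offset + 2
    else:
        n = first & 0x7F
        length = int.from_bytes(data[offset + 2:offset + 2 + n], "big")
        start = offset + 2 + n
    return type_id, data[start:start + length], start + length

# Records nest: a container type holds concatenated child records, which
# is how "any file can contain any kind of data" in a uniform way.
inner = encode_tlv(0x02, b"\x2a")   # a small leaf record
outer = encode_tlv(0x30, inner)     # a container record wrapping it
t, v, _ = decode_tlv(outer)
assert (t, v) == (0x30, inner)
```

Since every program speaks the same framing, data copied between programs (or between shell and GUI) keeps its type information instead of degrading to plain text.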
It sounds to me like you are describing Linux desktops.
Or Windows up to Windows Me.
I believe confusing the UI with the OS is a mistake Windows users are still paying for to this day. Thanks to NeXT, Mac users haven't had this torment since Mac OS 9.
I'm assuming something more like Qubes, with proper isolation between groups of processes.
It's an interesting idea, I've never tried it. It could also be a simple VM running a given OS (with reduced need for hardware support), or an application running interpreted code with the environment UI.
Generally the idea is to expand UX in more interesting/fun directions without incurring significant hurdles associated with an OS (where everything needs to run reliably and be secure).
I had the privilege of working as a junior operator in the '80s, and got exposed to some strange systems .. Tandem and Wang and so on .. and I always wondered if those weird Wang Imaging System things were out there, in an emulator somewhere, to play with, as it seemed like a very functional system for archive digitization.
As a retro-computing enthusiast/zealot, for me personally it is often quite rewarding to revisit the ‘high concept execution environments’ of different computing eras. I have a nice, moderately sized retro computing collection, 40 machines or so, and I recently got my SGI systems re-installed and set up for playing. Revisiting Irix after decades away from it is a real blast.
as a fellow dinosaur and a hobbyist, I concur. Especially the SGIs. For those who didn't know, MAME (of all things) can run IRIX to an extent https://sgi.neocities.org/
The one I'd like to see working is the IBM 3193. Few people know IBM had graphics terminals and the 3270 protocol has provisions for high-res images going to/from the terminal.
It might not be super unique, but it is a truly from-scratch "common" operating system built in public, which for me at least makes it a reference OS: one whose code a single person can fully understand, if they want to understand the codebase of a whole complete-looking OS.
“Novel” is likely meant in the sense of “novel concept” or “novel approach”. Serenity is great, but it isn’t trying to do anything novel in the OS space.
The cost of not having proper sandboxing is hard to overstate. Think of all the effort that has gone into linux containers, or VMs just to run another Linux kernel, all because sandboxing was an afterthought.
Then there's the stagnation in filesystems and networking, which can be at least partially attributed to the development frictions associated with a monolithic kernel. Organizational politics is interfering with including a filesystem in the Linux kernel right now.
I don't really understand or appreciate the distinction. The seL4 design was used as a starting point and small changes were made mostly as a matter of API convenience. I consider the design of an operating system to be by far the most difficult part, and the typing to be less impressive/important.
Helios hasn't done anything novel in terms of operating system design. It has taken an excellent design, reimplemented it in a more modern language, and built better tooling around it. I tend to point people towards the Helios project instead of seL4 because I think the tooling (especially around drivers) is so much better that it's not even a close comparison for productivity. It's where the open source OS community should be concentrating efforts.
Usually "based on" means the original codebase is mirrored/extended. Arguably, if what you say is true -- that Helios' design has only minor differences from seL4 -- then "based on" in reference to the design is indeed a better description than "inspired by", which (imo) makes it sound like there are significant changes.
Are there any operating systems designed from the ground up to support and fully utilize many processor systems?
I'm thinking of systems designed on the assumption that there are tens, hundreds, or even thousands of processors, with design decisions made at every level to leverage that availability.
The RoarVM [1] is a research project that showed how to run Squeak Smalltalk on thousands of cores (at one point it ran on 10,000 cores).
I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system. We rewrite the STEPS research software and the Frank code [2] on a million core environment [3]. On the M4 processor we try to use all types of cores, CPU, GPU, neural engine, video hardware, etc.
I think you're reaching towards the concept of a Single System Image [1] system. Such a system is a cluster of many computers, but you can interact with it as if it was a single computer.
But mainstream servers manage hundreds of processor cores these days. The Epyc 9965 has 192 cores, and you can put it in an off-the-shelf dual-socket board for 384 cores total (and two SMT threads per core, if you want to count that way). Thousands of cores would need exotic hardware; even a quad-socket Epyc wouldn't quite get you there (and afaik nobody makes those), and an 8-socket Epyc would be madness.
You can build these without shared memory using standard distributed database techniques for serializability and fault tolerance. I don't think it's a particularly good idea. There's nothing great about running 'ps' and getting half a million entries. Using the Unix user/group model isn't great for managing resources. It's not even that great to log in to start jobs. The only thing you're gaining is familiarity.
Building better abstractions (Kubernetes is an example, although I certainly hope we don't stay stuck there) is probably a better use of time.
It's not a true OS--but it's a platform on top of an arbitrary number of nodes that act as one.
The cool thing is that from the program's perspective you don't have to worry about the distributed system running underneath--the program just thinks it's running on an arbitrarily large machine.
Yes, to a degree, but probably not quite like you're thinking. The super computers and HPC clusters are highly tuned for the hardware they use which can have thousands of CPUs. But ultimately the "OS" that controls them takes on a bit of a different meaning in those contexts.
Ultimately, the OS has to be designed for the hardware/architecture it's actually going to run on, and not strictly just a concept like "lots of CPUs". How the hardware does interprocess communication, cache and memory coherency, interrupt routing, etc... is ultimately going to be the limiting factor, not the theoretical design of the OS. Most of the major OSs already do a really good job of utilizing the available hardware for most typical workloads, and can be tuned pretty well for custom workloads.
I added support for up to 254 CPUs in the kernel I work on, but we haven't taken advantage of NUMA yet, as we don't really need to: the performance hit for our workloads is negligible. But the Linuxes and BSDs do, and can already get as much performance out of the system as the hardware will allow.
Modern OSs are already designed with parallelism and concurrency in mind, and with the move towards making as many of the subsystems as possible lockless, I'm not sure there's much to be gained by redesigning everything from the ground up. It would probably look a lot like it does now.
There have certainly been research operating systems for large cache-coherent multiprocessors, for example IBM's K42 and ETH Zürich's Barrelfish. Both were designed to separate each core's kernel state from the others' by using message passing between cores instead of shared data structures.
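The per-core message-passing structure that K42 and Barrelfish explored can be loosely illustrated with threads and queues. This is a sketch of the idea (no shared mutable state between "cores", only messages), not of either system's actual code:

```python
import queue
import threading

# Each "core" owns its state privately and only communicates through
# its inbox -- no shared data structures, so no locks around the state.

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.inbox = queue.Queue()
        self.counter = 0                  # private, per-core state

    def run(self, n_messages):
        for _ in range(n_messages):
            msg = self.inbox.get()        # message passing, not sharing
            if msg == "increment":
                self.counter += 1         # only this core ever touches it

cores = [Core(i) for i in range(4)]
threads = [threading.Thread(target=c.run, args=(3,)) for c in cores]
for t in threads:
    t.start()
for c in cores:
    for _ in range(3):
        c.inbox.put("increment")
for t in threads:
    t.join()
assert [c.counter for c in cores] == [3, 3, 3, 3]
```

The point of the design is that cross-core coordination costs show up as explicit messages rather than as cache-line contention on shared structures, which is why it scales differently from a conventional shared-memory kernel.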
Well, there were Momenta and PenPoint --- the latter in particular focused on Notebooks which felt quite different, and Apple's Newton was even more so.
Oberon looks/feels strikingly different (and is _tiny_) and can be easily tried out via quite low-level emulation (and just wants some drivers to be fully native say on a Raspberry Pi)
As a kernel programmer I find it so lame that when people say "Operating Systems" what they're thinking of is just the superficial layer: GUI interfaces, desktop managers, and UX in general. As if the only things that could have an OS were desktop computers, laptops, tablets, and smartphones.
What about more specialized devices? e-readers, wifi-routers, smartwatches (hey, hello open sourced PebbleOS), all sorts of RTOS based things, etc? Isn't anything interesting happening there?
This list could be longer! I expected much more, given that CS students and hobbyists are doing this sort of thing often. Maybe the format is too verbose?
Honestly love seeing people obsess over old or weird OS stuff - makes me want to poke around in my own cluttered laptop folders just to see what weird bits I still have tucked away.
Don’t try to force your values on other people. In the end your time spent with friends is just as meaningless as their time spent developing an obscure OS.
There exist many OSes (and UI designs) based on non-mainstream concepts. Many were abandoned or forgotten; at design time suitable hardware didn't exist, there was no software to take advantage of them, etc.
A 'simple' retry at achieving such alternate vision could be very successful today due to changed environment, audience, or available hardware.
MercuryOS reminds me of the Apple Lisa: the way the Lisa managed applications invisibly was a step in the direction of selecting tools based on intentions. It was a document-centric system, which MercuryOS isn't, but a step in the same direction.
For some time, Windows 95 (IIRC) had a Templates folder. You'd put documents in it and you could right-click a folder and select New->Invoice or something similar based on what you had in the Templates folder. It was similar to Lisa's Stationery metaphor.
MercuryOS jumped out at me too; digging around the site, I really started to imagine using it. It does not appear to have gone beyond the design stage (which was where the creators intended to stop, it seems). It's more a re-imagining of HCI than of an OS as a whole. It caught a fair bit of unfair flak previously, imo: https://news.ycombinator.com/item?id=35777804
I think we shouldn't[1] be making Operating Systems, per se, but something like Operating Environments.
An Operating Environment (OE) would be a new interface, maybe shell and APIs to access file systems, devices, libraries and such -- possibly one that can be just launched as an application in your host OS. That way you can reuse all facilities provided by the host OS and present them in new, maybe more convenient ways. I guess Emacs is a sort of Operating Environment, as browsers as well. 'Fantasy computers' are also Operating Environments, like pico-8, mini micro[2], uxn, etc..
Of course, if you really have great a low-level reason to reinvent the way things are done (maybe to improve security, efficiency, DX, or all of that), then go ahead :)
The main reasons is the difficulty in developing robust low-level systems like file systems, the large number of processors you may want to support, and also creating or porting a huge number of device drivers. At this point Linux for example supports a huge number of devices (of course you could use some sort of compatibility layer). Also, developing a new UX is very different from developing a new low-level architecture (and you can just plug the UX into existing OSes).
In most cases an OS shell (and an OE), from the user point of view, is "just" a good way of finding and launching applications. Maybe a way of finding and managing files if you count the file manager in. It shouldn't get too much in the way and be the center of attention, I guess. (This contrasts with its low level design, which has a large number functions, APIs, etc.). But also it should probably be (in different contexts) cozy, comfortable, beautiful, etc. (because why not?). A nice advanced feature is the ability to automate things and run commands programmatically, which command shells tend to have by default but are more lacking in graphical shells. And I'm sure there is still a lot to explore in OS UX...
[1] I mean, unless you really have a reason with all caveats in mind of course.
[2] https://miniscript.org/MiniMicro/index.html#about
I think there's value in exploring operating systems and environments. And, it's very useful to note that you don't need to do both at the same time. This strikes me as an unneccessary worry though:
> The main reasons is the difficulty in developing robust low-level systems like file systems, the large number of processors you may want to support, and also creating or porting a huge number of device drivers. At this point Linux for example supports a huge number of devices (of course you could use some sort of compatibility layer).
As a start, you simply don't need to support all of this, and you don't even need to aspire to support it all. Support virtio versions of the relevant hardware class and try to run on reasonable hypervisors. Support one or two physical devices of the relevant hardware class to run on a real machine.
If you can plugin to an existing driver abstraction and pull in external drivers, great. NDIS is one option there.
If your OS is interesting, you can figure our the driver situation, and the cpu architecture situation. It would sure be neat to run on a raspberry pi, but only running on x86 isn't a big limitation.
> I think there's value in exploring operating systems and environments. And, it's very useful to note that you don't need to do both at the same time.
Although they can be done separately, I think there is also a value in doing both together, since the design can be made to work with both together better. Furthermore, they can be designed together with the hardware as well, to work together all three better.
Sorry, is "operating environment" a known and used term?
Completely agree with the sentiment here, but have not seen this term used to describe what should almost certainly be distinguished as its own concept to facilitate these kinds of conversations. OE is great terminology.
Some of these are operating systems in the most literal sense. Some are environments that aren't concerned with starting processes or reading disk blocks.
I think there are benefits of operating systems and of operating environments. There is more than finding and launching applications; there is ways of applications interacting with the system and with each other, as well as with the operator. Both low-level and high-level design are relevant, and should be made to work better together. There is also the computer hardware design; some ideas and functions of an operating system should work better when the hardware design and operating system work better with each other.
My ideas involve avoiding many of the features of modern systems.
Mine does not use: ambient authority, Unicode, file names, environment variables, command-line arguments (although there is an "initial message" which has a similar use), USB, spyware, worthless excessive animations (some animations are useful (but should still be possible to adjust the speed and disable them entirely), such as objects moving around so that you can more easily see where they went to), etc.
However, as important as what to be avoided, also are things which it would include, such as:
- Separate Command and Control key (like Macintosh has). There are other differences of the keyboard as well, and also differences in the working of the keyboard manager program. The application can request the application keyboard mode, which can be TRON code, or a specialized 8-bit character set (e.g. APL), or command mode, or game mode, or hybrid mode.
- Deterministic execution. A program's execution with the exception of messages sent/received by I/O is deterministic (and the I/O can always be overridden, so if the input is recorded and replayed, the output will also be the same, even if you run it on a different computer).
- Capability-based security with proxy capabilities. All I/O must use capabilities (a program that no longer has any usable capabilities is automatically terminated (unless a debugger is attached); this is one of the two "normal" ways to terminate a program, the other being to do an uninterruptible wait for any one of an empty set of objects). All messages between capabilities consist of bytes and/or capabilities, and when a program starts it receives an initial message, which should contain some capabilities (since if it doesn't, the program is immediately terminated unless a debugger is attached). A program can also create its own "proxy capabilities" and give those instead of the original one; the receiver cannot tell the difference.
- Deterministic execution and proxy capabilities are also useful for such things as: software testing, resist fingerprinting, etc.
- File system with forks. The forks are identified by 32-bit numbers, and the 16-bit numbers have a common meaning while higher numbers have meanings specific to the files. The file management functions of the system allow accessing and dealing with all of the forks. One of these forks is used for specifying the expected type of the initial message (this is like having typed command-line arguments instead of being untyped, but it is also used to provide capabilities for I/O, and other stuff).
- Transactions and locks on multiple objects at once (including objects in the file system, but also remote objects).
- Hypertext file system. Files can contain links to other files. There are no directory structures; you can use hyperlinks instead.
- Versioned file system. A link to a file can also optionally be for a specific version of the file. This means that one of the forks can be used to make a chain of the versions of the file.
- Extended TRON character code.
- A common data format used for most files as well as for a lot of the data being communicated between programs. This is a type/length/value format, a bit like DER but with many differences. This can allow potentially any files to contain potentially any kinds of data, and has a common representation for many types which can then be extended by the use of extension types. It also allows formatted text (rather than only plain text) to be used in many more parts of the system, and allows data to more easily be transferred between programs on the system. In addition, this allows the system to work more consistently.
- Tagged data and extensions. The common format would also allow such things like tagged data, e.g. that a number is some value according to a unit of measurement (that you can then do things with it e.g. automatically convert them to other units of measurement). Extensions are also possible.
- "Reveal Codes". If the file can contain formatted text, then revealing and dealing with the formatting codes directly is necessary during editing, even if the formatted text can also be displayed without revealing the codes. WYSIWYG without reveal codes is no good; and, actually, it would not quite be WYSIWYG anyways (since the formatting does not necessarily appear the same everywhere and is not intended to do so) (although there can be print preview as well, in case you actually do want to preview exactly what it looks like, to be more WYSIWYG).
- Command, Automation, and Query Language. This is a programming language which is also used for the command shell, and the common type/length/value format can be queried and manipulated with this. It can also be used to create proxy capabilities, and for communication between programs. Many kinds of mathematical and scientific functions are also available (for example, it supports big integers and big rationals, as well as matrices, etc). Functions from other programs can also be copied and automated, e.g. a data table displayed by another program can be queried, the function of a command button can be placed inside of a loop with other commands (delays, conditions, etc), etc.
- Better i18n, m17n, l10n, a11y, etc. Paper size settings do not belong in the locale setting (they belong in the printer driver configuration instead). Date/time formats do belong with the locale setting but should not be identified by language; instead an application program can call the i18n functions to format the date/time without needing an identifier. Telephone number formats do not belong with the locale setting either; this information is a combination of the data being handled and the modem configuration. There are many other improvements to be made as well; simply translating text, date/time, etc, is not good enough. Furthermore, accessibility features can be useful for everyone, and are not only for blind people etc.
- Space-age time keeping.
- Window indicators, which are associated with the capabilities of that window and can be used to monitor and control them. Some might also be usable for arranging windows, making tabbed windows and separating them, etc. This means that application programs do not need to implement many of these things themselves, which makes the system work more consistently as well as not requiring application programmers to add all of the functions (and those that are added can often be easier to implement).
- Improvements of the C programming language.
- All functions can be used from the command shell and most can also be used from the GUI, so both are available. The command shell and GUI can also be used together in a way that works much better than in most other systems, because you can easily move and copy data between them (see above about the Command, Automation, and Query Language).
- If a new version of the CPU is available with new instructions, the new version of the operating system can emulate those instructions.
- Don't try to hide stuff from the operator; allow the operator to decide what kind of security they want and to program what the operation of the computer should do. Everything must be documented, and the core system must be FOSS as well, so that it can also be examined, understood, altered, etc. The system can also be customized according to your use (including colours, fonts, font sizes, window management, keyboard layouts, substitutions of various things (including proxy capabilities), etc).
- And even more things than just this, too.
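The tagged-data and query-language bullets above can be made concrete with a toy sketch. Below is a minimal, hypothetical type/length/value encoding in Python (the tags, byte layout, and query are all invented for illustration; the comment above does not specify the actual format), plus a query of the kind the Command, Automation, and Query Language might run against a data table exposed by another program:

```python
import struct

# Hypothetical type tags for a common type/length/value (TLV) format.
T_INT, T_STR, T_LIST = 1, 2, 3

def encode(value):
    """Encode a Python value into a type/length/value byte string."""
    if isinstance(value, int):
        tag, body = T_INT, str(value).encode()
    elif isinstance(value, str):
        tag, body = T_STR, value.encode()
    elif isinstance(value, list):
        tag, body = T_LIST, b"".join(encode(v) for v in value)
    else:
        raise TypeError(value)
    return struct.pack(">BI", tag, len(body)) + body  # 1-byte tag, 4-byte length

def decode(buf, offset=0):
    """Decode one TLV record, returning (value, next_offset)."""
    tag, length = struct.unpack_from(">BI", buf, offset)
    start = offset + 5
    body = buf[start:start + length]
    if tag == T_INT:
        return int(body), start + length
    if tag == T_STR:
        return body.decode(), start + length
    items, pos = [], start          # T_LIST: decode nested records
    while pos < start + length:
        item, pos = decode(buf, pos)
        items.append(item)
    return items, pos

# A table another program might expose in the common format, and a query
# of the sort a command-shell one-liner in the query language could run.
table = encode([["alice", 3], ["bob", 7], ["carol", 5]])
rows, _ = decode(table)
big = [name for name, count in rows if count > 4]  # -> ["bob", "carol"]
```

Because every program speaks the same self-describing format, the same query works on any program's data without per-application glue code.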
It sounds to me like you are describing Linux desktops.
Or Windows up to Windows ME.
I believe confusing the UI with the OS is a mistake Windows users are still paying for to this day. Thanks to NeXT, Mac users haven't had this torment since Mac OS 9.
I'm assuming something more like Qubes, with proper isolation between groups of processes.
It's an interesting idea, I've never tried it. It could also be a simple VM running a given OS (with reduced need for hardware support), or an application running interpreted code with the environment UI.
Generally the idea is to expand UX in more interesting/fun directions without incurring significant hurdles associated with an OS (where everything needs to run reliably and be secure).
I've found this quite interesting related discussion: "Classical "Single user computers" were a flawed or at least limited idea" https://utcc.utoronto.ca/~cks/space/blog/tech/SingleUserComp... (discussion on lobste.rs: https://lobste.rs/s/plkdy5/classical_single_user_computers_w...)
I had the privilege to work as a junior operator in the 80’s, and got exposed to some strange systems .. Tandem and Wang and so on .. and I always wondered if those weird Wang Imaging System things were out there, in an emulator somewhere, to play with, as it seemed like a very functional system for archive digitization.
As a retro-computing enthusiast/zealot, for me personally it is often quite rewarding to revisit the ‘high concept execution environments’ of different computing eras. I have a nice, moderately sized retro-computing collection, 40 machines or so, and I recently got my SGI systems re-installed and set up for playing. Revisiting Irix after decades away from it is a real blast.
As a fellow dinosaur and a hobbyist, I concur. Especially SGIs. For those that didn't know, MAME (of all things) can run IRIX to an extent: https://sgi.neocities.org/
The one I'd like to see working is the IBM 3193. Few people know IBM had graphics terminals and the 3270 protocol has provisions for high-res images going to/from the terminal.
https://ifdesign.com/en/winner-ranking/project/datensichtger...
This list should include SerenityOS IMHO.
It might not be super unique, but it is a truly from-scratch "common" operating system built in public, which for me at least makes it a reference OS whose code a single person can fully understand, if they ever want to grasp the codebase of a whole, complete-looking OS.
“Novel” is likely meant in the sense of “novel concept” or “novel approach”. Serenity is great, but it isn’t trying to do anything novel in the OS space.
Also GenodeOS, which is in fact fairly unique.
> This list should include...
And a few dozen others as well.
Notably missing from this list are seL4 and Helios, which is based on it.
https://ares-os.org/docs/helios/
The cost of not having proper sandboxing is hard to overstate. Think of all the effort that has gone into linux containers, or VMs just to run another Linux kernel, all because sandboxing was an afterthought.
Then there's the stagnation in filesystems and networking, which can be at least partially attributed to the development frictions associated with a monolithic kernel. Organizational politics is interfering with including a filesystem in the Linux kernel right now.
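For readers unfamiliar with the capability model that seL4-style systems build their sandboxing on, here is a toy Python sketch (all class and method names are invented for illustration, not seL4's actual API): a task can only invoke an object through a capability it explicitly holds, so isolation is the default rather than an afterthought.

```python
# Toy capability-based access control, loosely in the spirit of
# seL4-style microkernels. Names and API are hypothetical.
class Capability:
    def __init__(self, obj, rights):
        self.obj = obj
        self.rights = frozenset(rights)

class Task:
    def __init__(self, name):
        self.name = name
        self.cspace = {}  # slot -> Capability; the task's entire world

    def grant(self, slot, cap):
        self.cspace[slot] = cap

    def invoke(self, slot, method, *args):
        cap = self.cspace.get(slot)
        if cap is None or method not in cap.rights:
            raise PermissionError(f"{self.name}: no capability for {method}")
        return getattr(cap.obj, method)(*args)

class File:
    def __init__(self, data):
        self.data = data
    def read(self):
        return self.data
    def write(self, data):
        self.data = data

secret = File("top secret")
sandboxed = Task("sandboxed")
sandboxed.grant(0, Capability(secret, {"read"}))  # read-only capability

sandboxed.invoke(0, "read")          # allowed
# sandboxed.invoke(0, "write", "x")  # raises PermissionError
```

The point is that there is no ambient authority to escape from: anything not reachable through the task's capability space simply does not exist for it, which is the inverse of bolting containers onto a kernel designed around global namespaces.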
It's not based on it, but inspired by it.
Helios was written from scratch.
I don't really understand or appreciate the distinction. The seL4 design was used as a starting point and small changes were made, mostly as a matter of API convenience. I consider the design of an operating system to be by far the most difficult part, and the typing to be less impressive/important.
Helios hasn't done anything novel in terms of operating system design. It has taken an excellent design, reimplemented it in a more modern language, and built better tooling around it. I tend to point people towards the Helios project instead of seL4 because I think the tooling (especially around drivers) is so much better that it's not even a close comparison for productivity. It's where the open source OS community should be concentrating its efforts.
Usually "based on" means the original codebase is mirrored/extended. Arguably, if what you say is true, i.e. that Helios' design has only minor differences from seL4, then "based on" in reference to the design is indeed a better description than "inspired by", which makes it sound (imo) as if there were significant changes.
"... whose design is based on it" would seem to cover all the, er, bases.
Are there any operating systems designed from the ground up to support and fully utilize many processor systems?
I'm thinking of systems designed on the assumption that there are tens, hundreds or even thousands of processors, with design decisions made at every level to leverage that availability.
The RoarVM [1] is a research project that showed how to run Squeak Smalltalk on thousands of cores (at one point it ran on 10,000 cores).
I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system. We are rewriting the STEPS research software and the Frank code [2] for a million-core environment [3]. On the M4 processor we try to use all types of cores: CPU, GPU, neural engine, video hardware, etc.
We just applied for YC funding.
[1] https://github.com/smarr/RoarVM
[2] https://www.youtube.com/watch?v=f1605Zmwek8
[3] https://www.youtube.com/watch?v=wDhnjEQyuDk
> I'm re-implementing it as a metacircular adaptive compiler and VM for a production operating system.
You are doing God's work. Thank you.
Good luck with your application.
I played with Squeak a bit [1], and several friends like [2] were also active in turning Squeak into an OS as well.
[1] https://web.archive.org/web/20231205061256/http://swain.webf...
[2] https://wiki.squeak.org/squeak/1762
I think you're reaching towards the concept of a Single System Image [1] system. Such a system is a cluster of many computers, but you can interact with it as if it was a single computer.
But mainstream servers manage hundreds of processor cores these days. The Epyc 9965 has 192 cores, and you can put it in an off-the-shelf dual-socket board for 384 cores total (and two SMT threads per core, if you want to count that way). Thousands of cores would need exotic hardware; even a quad-socket Epyc wouldn't quite get you there, and afaik nobody makes those; an 8-socket Epyc would be madness.
[1] https://en.m.wikipedia.org/wiki/Single_system_image
You can build these without shared memory using standard distributed database techniques for serializability and fault tolerance. I don't think it's a particularly good idea. There's nothing great about running 'ps' and getting half a million entries. Using the Unix user/group model isn't great for managing resources. It's not even that great to log in to start jobs. The only thing you're gaining is familiarity.
Building better abstractions (Kubernetes is an example, although I certainly hope we don't stay stuck there) is probably a better use of time.
I'm working on GridWhale (https://gridwhale.com).
It's not a true OS--but it's a platform on top of an arbitrary number of nodes that act as one.
The cool thing is that from the program's perspective you don't have to worry about the distributed system running underneath--the program just thinks it's running on an arbitrarily large machine.
There was barrelfish, but it's no longer under development.
https://barrelfish.org/
Yes, to a degree, but probably not quite like you're thinking. Supercomputers and HPC clusters are highly tuned for the hardware they use, which can have thousands of CPUs. But ultimately the "OS" that controls them takes on a somewhat different meaning in those contexts.
Ultimately, the OS has to be designed for the hardware/architecture it's actually going to run on, and not strictly just a concept like "lots of CPUs". How the hardware does interprocess communication, cache and memory coherency, interrupt routing, etc... is ultimately going to be the limiting factor, not the theoretical design of the OS. Most of the major OSs already do a really good job of utilizing the available hardware for most typical workloads, and can be tuned pretty well for custom workloads.
I added support for up to 254 CPUs in the kernel I work on, but we haven't taken advantage of NUMA yet, as we don't really need to: the performance hit for our workloads is negligible. But the Linuxes and BSDs do, and can already get as much performance out of the system as the hardware will allow.
Modern OSs are already designed with parallelism and concurrency in mind, and with the move towards making as many of the subsystems as possible lockless, I'm not sure there's much to be gained by redesigning everything from the ground up. It would probably look a lot like it does now.
Firmware for GPUs maybe? Not really an OS, but it is software which is built around scheduling and executing on thousands of parallel cores.
There have certainly been research operating systems for large cache-coherent multiprocessors. For example, IBM's K42 and ETH Zürich's Barrelfish. Both had been designed to separate the kernel state at each core from the others' by using message passing between cores instead of shared data structures.
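The per-core-state idea behind K42 and Barrelfish can be illustrated with a toy Python sketch (threads standing in for cores; all names invented): each "core" owns its state privately, and other cores may only send it messages, never touch its data structures directly.

```python
import queue
import threading

# Each Core owns its counter; the only way to read or change it is to
# put a message in that core's inbox, which the owning thread drains
# in order. No locks, no shared mutable data structures.
class Core(threading.Thread):
    def __init__(self, core_id):
        super().__init__()
        self.core_id = core_id
        self.inbox = queue.Queue()
        self.counter = 0  # private: only this core's thread touches it

    def run(self):
        while True:
            msg, reply = self.inbox.get()
            if msg == "stop":
                break
            if msg == "incr":
                self.counter += 1
            elif msg == "get":
                reply.put(self.counter)

cores = [Core(i) for i in range(4)]
for c in cores:
    c.start()

# Any other core/thread updates core 0's state by messaging it.
for _ in range(10):
    cores[0].inbox.put(("incr", None))

reply = queue.Queue()
cores[0].inbox.put(("get", reply))
value = reply.get()  # 10: the read is serialized behind the increments

for c in cores:
    c.inbox.put(("stop", None))
    c.join()
```

On real hardware the payoff is that each core's state stays hot in its own cache and no cache lines bounce between sockets; the message queues are the only point of cross-core traffic.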
And here I was hoping for a Catalog of Novell Operating Systems. :-(
I can’t help but notice that each of these stubs represents a not-insignificant portion of effort put in by one or more humans.
Indeed. Could have been retitled "Labor of Love OSes"
I would love to see some examples outside of the WIMP-based UI
Well, there were Momenta and PenPoint --- the latter in particular focused on Notebooks which felt quite different, and Apple's Newton was even more so.
Oberon looks/feels strikingly different (and is _tiny_) and can be easily tried out via quite low-level emulation (it just needs some drivers to be fully native on, say, a Raspberry Pi).
Maybe a catalog of kernels?
There's a non-comprehensive list of hobbyist kernels at https://wiki.osdev.org/Projects
MercuryOS towards the bottom is pretty cool
MercuryOS [1, 2] appears to be simply a "speculative vision" with no proof of concept implementation, a manifesto rather than an actual system.
I read through its goals, and it seems that it is against current ideas and metaphors, but without actually suggesting any alternatives.
Perhaps an OS for the AI era, where the user expresses an intent and the AI figures out its meaning and carries it out?
[1] https://www.mercuryos.com/
[2] https://news.ycombinator.com/item?id=35777804 (May 1, 2023, 161 comments)
As a kernel programmer I find it so lame that when people say "Operating Systems" what they're thinking of is just the superficial layer: GUI interfaces, desktop managers and UX in general. As if the only things that could have an OS were desktop computers, laptops, tablets and smartphones.
What about more specialized devices? e-readers, wifi-routers, smartwatches (hey, hello open sourced PebbleOS), all sorts of RTOS based things, etc? Isn't anything interesting happening there?
Cool list! I make something web-based like this called https://aesthetic.computer
The title should have been "Catalog of UI Demos". It has nothing to do with operating systems.
Desktop Neo was a sick demo, ten years ago. If there's ever a real project that implements it, I'd be willing to give it a whirl.
This list could be longer! I expected much more, given that CS students and hobbyists are doing this sort of thing often. Maybe the format is too verbose?
TempleOS?
Honestly love seeing people obsess over old or weird OS stuff - makes me want to poke around in my own cluttered laptop folders just to see what weird bits I still have tucked away.
[flagged]
Don’t try to force your values on other people. In the end your time spent with friends is just as meaningless as their time spent developing an obscure OS.
No thanks :)
Why the "novel" qualifier?
There exist many OSes (and UI designs) based on non-mainstream concepts. Many were abandoned or forgotten; at design time suitable hardware didn't exist, there was no software to take advantage of it, etc etc.
A 'simple' retry at achieving such an alternate vision could be very successful today due to a changed environment, audience, or available hardware.
MercuryOS reminds me of the Apple Lisa - The way it managed applications invisibly was a step in the direction of selecting tools based on intentions. It was a document-centric system, which MercuryOS isn't, but a step in the same direction.
For some time, Windows 95 (IIRC) had a Templates folder. You'd put documents in it and you could right-click a folder and select New->Invoice or something similar based on what you had in the Templates folder. It was similar to Lisa's Stationery metaphor.
MercuryOS jumped out at me too; digging around the site, I really started to imagine using it. It does not appear to have gone beyond the design (which is where the creators intended to stop, it seems). It's more a re-imagining of HCI than an OS as a whole. It caught a fair bit of unfair flak previously, imo: https://news.ycombinator.com/item?id=35777804
Calling it an OS is inaccurate. It could be an application.