Why Kubernetes Is the New Application Server

Have you ever wondered why you are deploying your multi-platform applications using containers? Is it just a matter of “following the hype”? In this article, I’m going to ask some provocative questions to make my case for Why Kubernetes is the new application server.

You might have noticed that the majority of languages are interpreted and use “runtimes” to execute your source code. In theory, most Node.js, Python, and Ruby code can be easily moved from one platform (Windows, Mac, Linux) to another platform. Java applications go even further by having the compiled Java class turned into a bytecode, capable of running anywhere that has a JVM (Java Virtual Machine).

The Java ecosystem provides a standard format to distribute all Java classes that are part of the same application. You can package these classes as a JAR (Java Archive), WAR (Web Archive), and EAR (Enterprise Archive) that contains the front end, back end, and libraries embedded. So I ask you: Why do you use containers to distribute your Java application? Isn’t it already supposed to be easily portable between environments?

The answer to this question from a developer's perspective isn't always obvious. But think for a moment about your development environment and some possible issues caused by the differences between it and the production environment:

  • Do you use Mac, Windows, or Linux? Have you ever faced an issue related to \ versus / as the file path separator?
  • What version of the JDK do you use? Do you use Java 10 in development, but production uses JRE 8? Have you faced any bugs introduced by JVM differences?
  • What version of the application server do you use? Is the production environment using the same configuration, security patches, and library versions?
  • During production deployment, have you encountered a JDBC driver issue that you didn’t face in your development environment due to different versions of the driver or database server?
  • Have you ever asked the application server admin to create a datasource or a JMS queue, only to find it was created with a typo?

All the issues above are caused by factors external to your application, and one of the greatest things about containers is that you can deploy everything (for example, a Linux distribution, the JVM, the application server, libraries, configurations and, finally, your application) inside a pre-built container. Plus, executing a single container that has everything built in is far easier than moving your code to a production environment and trying to resolve the differences when it doesn’t work. Since it’s easy to execute, it is also easy to scale the same container image to multiple replicas.

Empowering Your Application

Before containers became very popular, several non-functional requirements (NFRs) such as security, isolation, fault tolerance, configuration management, and others were provided by application servers. As an analogy, application servers were meant to be to applications what CD (Compact Disc) players are to CDs.

As a developer, you would be responsible for following a predefined standard and distributing the application in a specific format, while on the other hand the application server would “execute” your application and provide additional capabilities that could vary between “brands.” Note: In the Java world, the standard for enterprise capabilities provided by an application server has recently moved under the Eclipse Foundation. The work on Eclipse Enterprise for Java (EE4J) has resulted in Jakarta EE. (For more info, read the article Jakarta EE is officially out or watch the DevNation video: Jakarta EE: The future of Java EE.)

Following the same CD player analogy, with the rise of containers, the container image has become the new CD format. In fact, a container image is nothing more than a format for distributing your containers. (If you need to get a better handle on what container images are and how they are distributed, see A Practical Introduction to Container Terminology.)

The real benefits of containers appear when you need to add enterprise capabilities to your application. And the best way to provide these capabilities to a containerized application is by using Kubernetes as a platform for them. Additionally, the Kubernetes platform provides a great foundation for other projects such as Red Hat OpenShift, Istio, and Apache OpenWhisk to build on, making it easier to build and deploy robust, production-quality applications.

Let’s explore nine of these capabilities:

1 – Service Discovery

Service discovery is the process of figuring out how to connect to a service.  To get many of the benefits of containers and cloud-native applications, you need to remove configuration from your container images so you can use the same container image in all environments. Externalizing configuration from the application is one of the key principles of the 12-factor app. Service discovery is one of the ways to get configuration information from the runtime environment instead of it being hardcoded in the application. Kubernetes provides service discovery out of the box. Kubernetes also provides ConfigMaps and Secrets for removing configuration from your application containers.  Secrets solve some of the challenges that arise when you need to store the credentials for connecting to a service like a database in your runtime environment.
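
To make this concrete, here is a minimal sketch of a Kubernetes Service (the name my-db and the app label are invented for the example). Inside the cluster, the service is discoverable under the stable DNS name my-db.<namespace>.svc.cluster.local, so clients connect by name rather than by a hardcoded address:

apiVersion: v1
kind: Service
metadata:
  name: my-db            # hypothetical service name
spec:
  selector:
    app: my-db           # route traffic to pods carrying this label
  ports:
    - port: 5432         # the port clients connect to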

With Kubernetes, there’s no need to use an external server or framework for this.  While you can manage the environment settings for each runtime environment through Kubernetes YAML files, Red Hat OpenShift provides a GUI and CLI that can make it easier for DevOps teams to manage them.
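
For example, a ConfigMap and a Secret might look like the following sketch (all names and values are invented). A pod can consume both as environment variables via envFrom or valueFrom, so the container image itself stays environment-free:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_URL: jdbc:postgresql://my-db:5432/catalog   # hypothetical value
---
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials
type: Opaque
stringData:                    # plain text here; Kubernetes stores it base64-encoded
  DATABASE_PASSWORD: changeme  # placeholder credential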

2 – Basic Invocation

Applications running inside containers can be accessed through Ingress access—in other words, routes from the outside world to the service you are exposing. OpenShift provides route objects using HAProxy, which has several capabilities and load-balancing strategies.  You can use the routing capabilities to do rolling deployments, which can be the basis of some very sophisticated CI/CD strategies. See “6 – Build and Deployment Pipelines” below.
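
For illustration, a minimal OpenShift route might look like this sketch (the service name catalog and the hostname are assumptions for the example):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: catalog
spec:
  host: catalog.apps.example.com   # externally visible hostname (assumed)
  to:
    kind: Service
    name: catalog                  # the Service this route sends traffic to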

What if you need to run a one-time job, such as a batch process, or simply leverage the cluster to compute a result (such as computing the digits of Pi)? Kubernetes provides Job objects for this use case. There is also a CronJob object for managing time-based jobs.
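
Staying with the Pi example, a one-time Job could look like the following sketch (the image and command follow the well-known example from the Kubernetes documentation); a CronJob wraps the same pod template together with a schedule for time-based runs:

apiVersion: batch/v1
kind: Job
metadata:
  name: compute-pi
spec:
  backoffLimit: 4                # retry a failed pod up to four times
  template:
    spec:
      restartPolicy: Never       # a Job's pod should not be restarted in place
      containers:
        - name: pi
          image: perl
          command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]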

3 – Elasticity

Elasticity is solved in Kubernetes by using ReplicaSets (which used to be called Replication Controllers). Just like most configurations for Kubernetes, a ReplicaSet is a way to reconcile a desired state: you tell Kubernetes what state the system should be in and Kubernetes figures out how to make it so. A ReplicaSet controls the number of replicas or exact copies of the app that should be running at any time.
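
For instance, a ReplicaSet declaring three replicas of a hypothetical catalog application might look like this sketch; Kubernetes continuously reconciles the cluster toward this count:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: catalog
spec:
  replicas: 3                    # the desired number of identical pods
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
        - name: catalog
          image: example/catalog:1.0   # image name assumed for the example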

But what happens when you build a service that is even more popular than you planned for and you run out of compute? You can use the Kubernetes Horizontal Pod Autoscaler, which scales the number of pods based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).
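
A sketch of such an autoscaler, reusing the hypothetical catalog ReplicaSet from above and assuming a CPU target of 80%:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: catalog
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: ReplicaSet
    name: catalog
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # scale out when average CPU exceeds this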

4 – Logging

Since your Kubernetes cluster can and will run several replicas of your containerized application, it’s important that you aggregate these logs so they can be viewed in one place. Also, in order to utilize benefits like autoscaling (and other cloud-native capabilities), your containers need to be immutable. So you need to store your logs outside of your container so they will be persistent across runs. OpenShift allows you to deploy the EFK stack to aggregate logs from hosts and applications, whether they come from multiple containers or even from deleted pods.

The EFK stack is composed of:

  • Elasticsearch (ES), an object store where all logs are stored
  • Fluentd, which gathers logs from nodes and feeds them to Elasticsearch
  • Kibana, a web UI for Elasticsearch

5 – Monitoring

Although logging and monitoring seem to address the same problem, they are different from each other. Monitoring involves observation, checking, often alerting, and recording. Logging is recording only.

Prometheus is an open-source monitoring system that includes a time-series database. It can be used for storing and querying metrics, alerting, and creating visualizations to gain insights into your systems. Prometheus is perhaps the most popular choice for monitoring Kubernetes clusters. On the Red Hat Developers blog, there are several articles covering monitoring with Prometheus. You can also find Prometheus articles on the OpenShift blog.

You can also see Prometheus in action together with Istio at https://learn.openshift.com/servicemesh/3-monitoring-tracing.
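
As a small illustration of how Prometheus discovers what to scrape, a minimal prometheus.yml fragment might use its built-in Kubernetes service discovery like this (the job name is arbitrary, and real setups typically add relabeling rules):

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod        # discover every pod in the cluster as a scrape target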

6 – Build and Deployment Pipelines

CI/CD (Continuous Integration/Continuous Delivery) pipelines are not a strict “must have” requirement for your applications. However, CI/CD are often cited as pillars of successful software development and DevOps practices.  No software should be deployed into production without a CI/CD pipeline. The book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation, by Jez Humble and David Farley, says this about CD: “Continuous Delivery is the ability to get changes of all types—including new features, configuration changes, bug fixes and experiments—into production, or into the hands of users, safely and quickly in a sustainable way.”

OpenShift provides CI/CD pipelines out of the box as a “build strategy.” Check out this video that I recorded two years ago, which has an example of a Jenkins CI/CD pipeline that deploys a new microservice.

7 – Resilience

While Kubernetes provides resilience options for the cluster itself, it can also help the application be resilient by providing PersistentVolumes that support replicated volumes. Kubernetes’ ReplicationControllers/Deployments ensure that the specified number of pod replicas is consistently running across the cluster, which automatically handles any possible node failure.

Together with resilience, fault tolerance serves as an effective means to address users’ reliability and availability concerns. Fault tolerance can also be provided to an application running on Kubernetes through Istio, via its retry rules, circuit breaker, and pool ejection. Do you want to see it for yourself? Try the Istio Circuit Breaker tutorial at https://learn.openshift.com/servicemesh/7-circuit-breaker.
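
To give a flavor of the configuration, here is a sketch of an Istio DestinationRule using the v1alpha3 API that was current when this article was written (the host name and thresholds are assumptions):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalog
spec:
  host: catalog
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1   # trip the circuit breaker quickly under load
    outlierDetection:
      consecutiveErrors: 3           # eject a pod after three consecutive errors
      interval: 10s                  # how often hosts are scanned
      baseEjectionTime: 30s          # keep an ejected pod out of the pool this long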

8 – Authentication

Authentication in Kubernetes can also be provided by Istio through its mutual TLS authentication, which aims to enhance the security of microservices and their communication without requiring service code changes. It is responsible for:

  • Providing each service with a strong identity that represents its role to enable interoperability across clusters and clouds
  • Securing service-to-service communication and end user-to-service communication
  • Providing a key management system to automate key and certificate generation, distribution, rotation, and revocation

Additionally, it is worth mentioning that you can also run Keycloak inside a Kubernetes/OpenShift cluster to provide both authentication and authorization. Keycloak is the upstream product for Red Hat Single Sign-on. For more information, read Single-Sign On Made Easy with Keycloak. If you are using Spring Boot, watch the DevNation video: Secure Spring Boot Microservices with Keycloak or read the blog article.

9 – Tracing

Istio-enabled applications can be configured to collect trace spans using Zipkin or Jaeger. Regardless of what language, framework, or platform you use to build your application, Istio can enable distributed tracing. Check it out at https://learn.openshift.com/servicemesh/3-monitoring-tracing.  See also Getting Started with Istio and Jaeger on your laptop and the recent DevNation video: Advanced microservices tracing with Jaeger.

Are Application Servers Dead?

Going through these capabilities, you can see how Kubernetes + OpenShift + Istio can really empower your application and provide features that used to be the responsibility of an application server or a software framework such as Netflix OSS. Does that mean application servers are dead?

In this new containerized world, application servers are mutating into something more like frameworks. It’s natural that the evolution of software development caused the evolution of application servers. A great example of this evolution is the Eclipse MicroProfile specification, with WildFly Swarm as the application server, which provides the developer with features such as fault tolerance, configuration, tracing, REST (client and server), and so on. However, WildFly Swarm and the MicroProfile specification are designed to be very lightweight. WildFly Swarm doesn’t have the vast array of components required by a full Java enterprise application server. Instead, it focuses on microservices and having just enough of the application server to build and run your application as a simple executable .jar file.  You can read more about MicroProfile on this blog.

Furthermore, Java applications can have features such as the Servlet engine, a datasource pool, dependency injection, transactions, messaging, and so forth. Of course, frameworks can provide these features, but an application server must also have everything you need to build, run, deploy, and manage enterprise applications in any environment, regardless of whether they are inside containers. In fact, application servers can be executed anywhere, for instance, on bare metal, on virtualization platforms such as Red Hat Virtualization, on private cloud environments such as Red Hat OpenStack Platform, and also on public cloud environments such as Microsoft Azure or Amazon Web Services.

A good application server ensures consistency between the APIs that are provided and their implementations. Developers can be sure that deploying their business logic, which requires certain capabilities, will work because the application server developers (and the defined standards) have ensured that these components work together and have evolved together. Furthermore, a good application server is also responsible for maximizing throughput and scalability, because it handles all the requests from users; for reducing latency and improving load times, because it helps your application’s disposability; for being lightweight, with a small footprint that minimizes hardware resources and costs; and, finally, for being secure enough to avoid any security breach. For Java developers, Red Hat provides Red Hat JBoss Enterprise Application Platform, which fulfills all the requirements of a modern, modular application server.

Conclusion

Container images have become the standard packaging format to distribute cloud-native applications. While containers “per se” don’t provide real business advantages to applications, Kubernetes and its related projects, such as OpenShift and Istio, provide the non-functional requirements that used to be part of an application server.

Most of these non-functional requirements that developers used to get from an application server or from a library such as Netflix OSS were bound to a specific language, for example, Java. On the other hand, when developers choose to meet these requirements using Kubernetes + OpenShift + Istio, they are not tied to any specific language, which can encourage the use of the best technology/language for each use case.

Finally, application servers still have their place in software development. However, they are mutating into something more like language-specific frameworks that are a great shortcut when developing applications, since they contain lots of already written and tested functionality.

One of the best things about moving to containers, Kubernetes, and microservices is that you don’t have to choose a single application server, framework, architectural style, or even language for your application. You can easily deploy a container with JBoss EAP running your existing Java EE application, alongside other containers that have new microservices using WildFly Swarm, or Eclipse Vert.x for reactive programming. These containers can all be managed through Kubernetes. To see this concept in action, take a look at Red Hat OpenShift Application Runtimes. Use the Launch service to build and deploy a sample app online using WildFly Swarm, Vert.x, Spring Boot, or Node.js. Select the Externalized Configuration mission to learn how to use Kubernetes ConfigMaps. This will get you started on your path to cloud-native applications.

You can say that Kubernetes/OpenShift is the new Linux or even that “Kubernetes is the new application server.” But the fact is that an application server/runtime + OpenShift/Kubernetes + Istio has become the “de facto” cloud-native application platform!


About the author:

Rafael Benevides is Director of Developer Experience at Red Hat. With many years of experience in several fields of the IT industry, he helps developers and companies all over the world to be more effective in software development. Rafael considers himself a problem solver who has a big love for sharing. He is a member of the Apache DeltaSpike PMC—a Duke’s Choice Award-winning project—and a speaker at conferences such as JavaOne, Devoxx, TDC, DevNexus, and many others. | LinkedIn | rafabene.com


Walkthrough for Systemd Portable Services

systemd v239 contains a great number of new features. One of them is first class support for Portable Services. In this blog story I'd like to shed some light on what they are and why they might be interesting for your application.

What are "Portable Services"?

The "Portable Service" concept takes inspiration from classic chroot() environments as well as container management and brings a number of their features to more regular system service management.

While the definition of what a "container" really is is hotly debated, I figure people can generally agree that the "container" concept primarily provides two major features:

  1. Resource bundling: a container generally brings its own file system tree along, bundling any shared libraries and other resources it might need along with the main service executables.

  2. Isolation and sand-boxing: a container operates in a name-spaced environment that is relatively detached from the host. Besides living in its own file system namespace it usually also has its own user database, process tree and so on. Access from the container to the host is limited with various security technologies.

Of these two concepts the first one is also what traditional UNIX chroot() environments are about.

Both resource bundling and isolation/sand-boxing are concepts systemd has implemented to varying degrees for a longer time. Specifically, RootDirectory= and RootImage= have been around for a long time, and so have been the various sand-boxing features systemd provides. The Portable Services concept builds on that, putting these features together in a new, integrated way to make them more accessible and usable.

OK, so what precisely is a "Portable Service"?

Much like a container image, a portable service on disk can be just a directory tree that contains service executables and all their dependencies, in a hierarchy resembling the normal Linux directory hierarchy. A portable service can also be a raw disk image, containing a file system containing such a tree (which can be mounted via a loop-back block device), or multiple file systems (in which case they need to follow the Discoverable Partitions Specification and be located within a GPT partition table). Regardless of whether the portable service on disk is a simple directory tree or a raw disk image, let's call this concept the portable service image.

Such images can be generated with any tool typically used for the purpose of installing OSes inside some directory, for example dnf --installroot= or debootstrap. There are very few requirements made on these trees, except the following two:

  1. The tree should carry systemd unit files for relevant services in them.

  2. The tree should carry /usr/lib/os-release (or /etc/os-release) OS release information.

Of course, as you might notice, OS trees generated from any of today's big distributions generally qualify for these two requirements without any further modification, as pretty much all of them adopted /usr/lib/os-release and tend to ship their major services with systemd unit files.
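
For reference, a minimal /usr/lib/os-release might contain no more than something like this (values assumed for a Fedora-based tree):

ID=fedora
VERSION_ID=28
PRETTY_NAME="Fedora 28 (Twenty Eight)"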

A portable service image generated like this can be "attached" or "detached" from a host:

  1. "Attaching" an image to a host is done through the new portablectl attach command. This command dissects the image, reading the os-release information, and searching for unit files in them. It then copies relevant unit files out of the images and into /etc/systemd/system/. After that it augments any copied service unit files in two ways: a drop-in adding a RootDirectory= or RootImage= line is added in so that even though the unit files are now available on the host when started they run the referenced binaries from the image. It also symlinks in a second drop-in which is called a "profile", which is supposed to carry additional security settings to enforce on the attached services, to ensure the right amount of sand-boxing.

  2. "Detaching" an image from the host is done through portable detach. It reverses the steps above: the unit files copied out are removed again, and so are the two drop-in files generated for them.

While a portable service is attached its relevant unit files are made available on the host like any others: they will appear in systemctl list-unit-files, you can enable and disable them, you can start them and stop them. You can extend them with systemctl edit. You can introspect them. You can apply resource management to them like to any other service, and you can process their logs like any other service and so on. That's because they really are native systemd services, except that they have a 'twist', if you so will: they have tougher security by default and store their resources in a root directory or image.

And that's already the essence of what Portable Services are.

A couple of interesting points:

  1. Even though the focus is on shipping service unit files in portable service images, you can actually ship timer units, socket units, target units, path units in portable services too. This means you can very naturally do time, socket and path based activation. It's also entirely fine to ship multiple service units in the same image, in case you have more complex applications.

  2. This concept introduces zero new metadata. Unit files are an existing concept, as are os-release files, and — in case you opt for raw disk images — GPT partition tables are already established too. This also means existing tools to generate images can be reused for building portable service images to a large degree as no completely new artifact types need to be generated.

  3. Because the Portable Service concept introduces zero new metadata and just builds on existing security and resource bundling features of systemd it's implemented in a set of distinct tools, relatively disconnected from the rest of systemd. Specifically, the main user-facing command is portablectl, and the actual operations are implemented in systemd-portabled.service. If you so will, portable services are a true add-on to systemd, just making a specific work-flow nicer to use than with the basic operations systemd otherwise provides. Also note that systemd-portabled provides bus APIs accessible to any program that wants to interface with it, portablectl is just one tool that happens to be shipped along with systemd.

  4. Since Portable Services are a feature we only added very recently we wanted to keep some freedom to make changes still. Due to that we decided to install the portablectl command into /usr/lib/systemd/ for now, so that it does not appear in $PATH by default. This means, for now you have to invoke it with a full path: /usr/lib/systemd/portablectl. We expect to move it into /usr/bin/ very soon though, and make it a fully supported interface of systemd.

  5. You may wonder which unit files contained in a portable service image are the ones considered "relevant" and are actually copied out by the portablectl attach operation. Currently, this is derived from the image name. Let's say you have an image stored in a directory /var/lib/portables/foobar_4711/ (or alternatively in a raw image /var/lib/portables/foobar_4711.raw). In that case the unit files copied out match the pattern foobar*.service, foobar*.socket, foobar*.target, foobar*.path, foobar*.timer.

  6. The Portable Services concept does not define any specific method how images get on the deployment machines, that's entirely up to administrators. You can just scp them there, or wget them. You could even package them as RPMs and then deploy them with dnf if you feel adventurous.

  7. Portable service images can reside in any directory you like. However, if you place them in /var/lib/portables/ then portablectl will find them easily and can show you a list of images you can attach and suchlike.

  8. Attaching a portable service image can be done persistently, so that it remains attached on subsequent boots (which is the default), or it can be attached only until the next reboot, by passing --runtime to portablectl.

  9. Because portable service images are ultimately just regular OS images, it's natural and easy to build a single image that can be used in three different ways:

    1. It can be attached to any host as a portable service image.

    2. It can be booted as OS container, for example in a container manager like systemd-nspawn.

    3. It can be booted as host system, for example on bare metal or in a VM manager.

    Of course, to qualify for the latter two the image needs to contain more than just the service binaries, the os-release file and the unit files. To be bootable in an OS container manager such as systemd-nspawn the image needs to contain an init system of some form, for example systemd. To be bootable on bare metal or as a VM it also needs a boot loader of some form, for example systemd-boot.

Profiles

In the previous section the "profile" concept was briefly mentioned. Since they are a major feature of the Portable Services concept, they deserve some focus. A "profile" is ultimately just a pre-defined drop-in file for unit files that are attached to a host. They are supposed to mostly contain sand-boxing and security settings, but may actually contain any other settings, too. When a portable service is attached a suitable profile has to be selected. If none is selected explicitly, the default profile called default is used. systemd ships with four different profiles out of the box:

  1. The default profile provides a medium level of security. It contains settings to drop capabilities, enforce system call filters, restrict many kernel interfaces and mount various file systems read-only.

  2. The strict profile is similar to the default profile, but generally uses the most restrictive sand-boxing settings. For example networking is turned off and access to AF_NETLINK sockets is prohibited.

  3. The trusted profile is the least strict of them all. In fact it makes almost no restrictions at all. A service run with this profile has basically full access to the host system.

  4. The nonetwork profile is mostly identical to default, but also turns off network access.

Note that the profile is selected at the time the portable service image is attached, and it applies to all service files attached, in case multiple are shipped in the same image. Thus, the sand-boxing restrictions to enforce are selected by the administrator attaching the image and not by the image vendor.

Additional profiles can be defined easily by the administrator, if needed. We might also add additional profiles sooner or later to be shipped with systemd out of the box.
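
Since a profile is ultimately just a unit drop-in, defining one is nothing more than writing ordinary unit settings into a service.conf file. A hypothetical custom profile, roughly like default but with networking turned off and home directories made inaccessible, might look like this (all of these are regular systemd sand-boxing settings):

[Service]
PrivateNetwork=yes       # disconnect the service from the network
ProtectHome=yes          # make /home, /root and /run/user inaccessible
ProtectSystem=strict     # mount the file system hierarchy read-only
PrivateTmp=yes           # give the service its own /tmp and /var/tmp
NoNewPrivileges=yes      # forbid privilege escalation via setuid and friends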

What's the use-case for this? If I have containers, why should I bother?

Portable Services are primarily intended to cover use-cases where code should more feel like "extensions" to the host system rather than live in disconnected, separate worlds. The profile concept is supposed to be tunable to the exact right amount of integration or isolation needed for an application.

In the container world the concept of "super-privileged containers" has been touted a lot, i.e. containers that run with full privileges. It's precisely that use-case that portable services are intended for: extensions to the host OS, that default to isolation, but can optionally get as much access to the host as needed, and can naturally take benefit of the full functionality of the host. The concept should hence be useful for all kinds of low-level system software that isn't shipped with the OS itself but needs varying degrees of integration with it. Besides servers and appliances this should be particularly interesting for IoT and embedded devices.

Because portable services are just a relatively small extension to the way system services are otherwise managed, they can be treated like regular services for almost all use-cases: they will appear alongside regular services in all tools that can introspect systemd unit data, and can be managed the same way when it comes to logging, resource management, runtime life-cycles and so on.

Portable services are a very generic concept. While the original use-case is OS extensions, it's of course entirely up to you and other users to use them in a suitable way of your choice.

Walkthrough

Let's have a look at how this all can be used. We'll start with building a portable service image from scratch, before we attach, enable and start it on a host.

Building a Portable Service image

As mentioned, you can use any tool you like that can create OS trees or raw images for building Portable Service images, for example debootstrap or dnf --installroot=. For this walkthrough we'll use mkosi, which is ultimately just a fancy wrapper around dnf and debootstrap but makes a number of things particularly easy when repetitively building images from source trees.

I have pushed everything necessary to reproduce this walkthrough locally to a GitHub repository. Let's check it out:

$ git clone https://github.com/systemd/portable-walkthrough.git

Let's have a look in the repository:

  1. First of all, walkthroughd.c is the main source file of our little service. To keep things simple it's written in C, but it could be in any language of your choice. The daemon as implemented won't do much: it just starts up and waits for SIGTERM, at which point it will shut down. It's ultimately useless, but hopefully illustrates how this all fits together. The C code has no dependencies besides libc.

  2. walkthroughd.service is a systemd unit file that starts our little daemon. It's a simple service, hence the unit file is trivial (a sketch of what it might look like follows this list).

  3. Makefile is a short make build script to build the daemon binary. It's pretty trivial, too: it just takes the C file and builds a binary from it. It can also install the daemon. It places the binary in /usr/local/lib/walkthroughd/walkthroughd (why not in /usr/local/bin? because it's not a user-facing binary but a system service binary), and its unit file in /usr/local/lib/systemd/walkthroughd.service. If you want to test the daemon on the host, you can simply run make and then ./walkthroughd to check that everything works.

  4. mkosi.default is a file that tells mkosi how to build the image. We opt for a Fedora-based image here (but we might as well have used Debian, or any other supported distribution). We need no particular packages during runtime (after all we only depend on libc), but during the build phase we need gcc and make, hence these are the only packages we list in BuildPackages=.

  5. mkosi.build is a shell script that is invoked during mkosi's build logic. All it does is invoke make and make install to build and install our little daemon, and afterwards it extends the distribution-supplied /etc/os-release file with an additional field that describes our portable service a bit.
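
For orientation, a unit file for a daemon like this might look more or less like the following sketch (the description and binary path match what the walkthrough below shows; the rest is assumed):

[Unit]
Description=A simple example service

[Service]
ExecStart=/usr/local/lib/walkthroughd/walkthroughd

[Install]
WantedBy=multi-user.target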

Let's now use this to build the portable service image. For that we use the mkosi tool. It's sufficient to invoke it without parameters to build the first image: it will automatically discover mkosi.default and mkosi.build, which tell it what to do. (Note that if you work on a project like this for a longer time, mkosi -if is probably the better command to use, as it speeds up building substantially by using an incremental build mode.) mkosi will download the necessary RPMs, and put them all together. It will build our little daemon inside the image and after all that's done it will output the resulting image: walkthroughd_1.raw.

Because we opted to build a GPT raw disk image in mkosi.default this file is actually a raw disk image containing a GPT partition table. You can use fdisk -l walkthroughd_1.raw to enumerate the partition table. You can also use systemd-nspawn -i walkthroughd_1.raw to explore the image quickly if you need.

Using the Portable Service Image

Now that we have a portable service image, let's see how we can attach, enable and start the service included within it.

First, let's attach the image:

# /usr/lib/systemd/portablectl attach ./walkthroughd_1.raw
(Matching unit files with prefix 'walkthroughd'.)
Created directory /etc/systemd/system/walkthroughd.service.d.
Written /etc/systemd/system/walkthroughd.service.d/20-portable.conf.
Created symlink /etc/systemd/system/walkthroughd.service.d/10-profile.conf → /usr/lib/systemd/portable/profile/default/service.conf.
Copied /etc/systemd/system/walkthroughd.service.
Created symlink /etc/portables/walkthroughd_1.raw → /home/lennart/projects/portable-walkthrough/walkthroughd_1.raw.

The command will show you exactly what it has been doing: it just copied the main service file out, and added the two drop-ins, as expected.

Let's see if the unit is now available on the host, just like a regular unit, as promised:

# systemctl status walkthroughd.service
● walkthroughd.service - A simple example service
   Loaded: loaded (/etc/systemd/system/walkthroughd.service; disabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/walkthroughd.service.d
           └─10-profile.conf, 20-portable.conf
   Active: inactive (dead)

Nice, it worked. We see that the unit file is available and that systemd correctly discovered the two drop-ins. The unit is neither enabled nor started, however. Yes, attaching a portable service image doesn't imply enabling or starting it. It just means the unit files contained in the image are made available to the host. It's up to the administrator to then enable them (so that they are automatically started when needed, for example at boot), and/or start them (in case they shall run right away).

Let's now enable and start the service in one step:

# systemctl enable --now walkthroughd.service
Created symlink /etc/systemd/system/multi-user.target.wants/walkthroughd.service → /etc/systemd/system/walkthroughd.service.

Let's check if it's running:

# systemctl status walkthroughd.service
● walkthroughd.service - A simple example service
   Loaded: loaded (/etc/systemd/system/walkthroughd.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/walkthroughd.service.d
           └─10-profile.conf, 20-portable.conf
   Active: active (running) since Wed 2018-06-27 17:55:30 CEST; 4s ago
 Main PID: 45003 (walkthroughd)
    Tasks: 1 (limit: 4915)
   Memory: 4.3M
   CGroup: /system.slice/walkthroughd.service
           └─45003 /usr/local/lib/walkthroughd/walkthroughd

Jun 27 17:55:30 sigma walkthroughd[45003]: Initializing.

Perfect! We can see that the service is now enabled and running. The daemon is running as PID 45003.

Now that we verified that all is good, let's stop, disable and detach the service again:

# systemctl disable --now walkthroughd.service
Removed /etc/systemd/system/multi-user.target.wants/walkthroughd.service.
# /usr/lib/systemd/portablectl detach ./walkthroughd_1.raw
Removed /etc/systemd/system/walkthroughd.service.
Removed /etc/systemd/system/walkthroughd.service.d/10-profile.conf.
Removed /etc/systemd/system/walkthroughd.service.d/20-portable.conf.
Removed /etc/systemd/system/walkthroughd.service.d.
Removed /etc/portables/walkthroughd_1.raw.

And finally, let's see that it's really gone:

# systemctl status walkthroughd
Unit walkthroughd.service could not be found.

Perfect! It worked!

I hope the above gets you started with Portable Services. If you have further questions, please contact our mailing list.

Further Reading

A more low-level document explaining details is shipped along with systemd.

There are also relevant manual pages: portablectl(1) and systemd-portabled(8).

For further information about mkosi see its homepage.


You should be sleeping more than eight hours a night

For something that we spend a third of our lives doing (if we’re lucky), sleep is something that we know relatively little about. “Sleep is actually a relatively recent discovery,” says Daniel Gartenberg, a sleep scientist who is currently an assistant adjunct professor in biobehavioral health at Penn State. “Scientists only started looking at sleep 70 years ago.”

As anyone who has lain awake at night contemplating the complexities of the universe can attest, sleep is a slippery beast. It involves a complex web of biological and neurological processes, all of which can be thrown off by something as simple as a partner’s nasal trumpeting or a coffee too late in the day.

There are also many, many misconceptions about sleep: that you can “catch up” on the weekend for lost hours of shuteye. That you can get by on four hours’ sleep a night. That a nip of whiskey before bed helps you sleep better. Even that eating cheese before snoozing causes nightmares.

To set the record straight about being horizontal, Quartz spoke to one of the world’s most-talked-about sleep scientists. Daniel Gartenberg is currently working on research funded by the National Science Foundation and the National Institute of Aging and is also a TED resident. (Watch his talk on deep sleep here.) He’s also an entrepreneur who has launched several cognitive-behavioral-therapy apps, including the Sonic Sleep Coach alarm clock. All that with 8.5 hours of sleep a night.

Some topics we cover:

  • why 8.5 hours of sleep is the new eight hours
  • the genes that dictate if you’re a morning person or a night owl
  • why you should take a nap instead of meditating
  • how sleep deprivation can be a tool to fight depression
  • why sleep should be the new worker’s rights
  • and tips on how to get a better night’s rest (hint: it’s not your Fitbit)

You can also read Gartenberg’s comments on “sleep inertia”—the scientific reason why you feel so groggy when you wake up—here.

This interview has been lightly condensed and edited for clarity.

Quartz: Why do we need sleep?

Daniel Gartenberg: Every organism on the planet sleeps in some fashion, to some degree—even the basic fruit fly. What makes sleep so essential for our wellbeing comes down to three main things: to save our energy, to help our cells recover, and to help us process and understand our environment.

This third one is what I study. The “synaptic homeostasis hypothesis” is this idea that during the day, we make all these connections with the world around us. It used to be like, “Don’t go over there—the lions live there now.” Now it’s like, “What did Barbara say to me in the office?” These excitatory connections we make during the day result in the neurons in our brains getting overall higher activation. Then during the nighttime when we sleep, we have a downregulating process where the things that didn’t really matter to your survival sink to the bottom, and the things that are most relevant to your survival rise to the top. What deep sleep does is all the neural processing, and what REM sleep [rapid-eye-movement sleep] and light sleep do is basically integrate that into your long-term personality and understanding of the world.

What other differences are there between deep sleep and REM sleep?

A lot of people don’t understand that these are two very, very different processes. A lot of people probably learned from basic psych in high school that you have these sleep stages: light sleep > deep sleep > light sleep > REM, and repeat. As you sleep more, you get less and less deep sleep, and also if you sleep-deprive yourself, you get more deep sleep.

During deep sleep, you get these long-burst brainwaves that are called delta waves, but during REM, your brainwaves are actually functioning very similarly to waking life. Your body is also paralyzed during REM—it’s a very noticeable physiological difference. You also lose thermo-regulation, meaning if it’s hot in your environment, your body gets hot, kind of like you’re a chameleon.

Your whole thing is that deep sleep is more important than REM sleep. Why?

It’s an ongoing debate in the literature—really, it’s both. Deep sleep is really important, but REM sleep is also important. We know that the human growth hormone, cell-recovery things, and the ability to process new information are associated with deep sleep. REM sleep is basically the processing of information.

Asking for the workaholics in the room: Do we really need that much sleep?

A professor I collaborate with at Penn State named Orfeu Buxton says that 8.5 hours of sleep is the new eight hours. In order to get a healthy eight hours of sleep, which is the amount that many people need, you need to be in bed for 8.5 hours. The standard in the literature is that healthy sleepers spend more than 90% of the time in bed asleep, so if you’re in bed for eight hours, a healthy sleeper might actually sleep for only about 7.2 hours.

That being said, some people are short sleepers: You can do a test to find out if you have a genetic makeup that makes you a short sleeper. That’s rare, though, so by and large, people are not getting enough sleep. Getting half an hour less than what you need really adds up over a week.

To see how much sleep you really need, my professor suggests that when you go on vacation, try to stick to your normal bedtime and then see what time you wake up. With no stressors or time to get up, you’ll just fall into a natural pattern, and that’s probably how much sleep you actually need.

I normally get around six to seven hours of sleep a night and feel fine. But is that just because how I feel has become my normal operating mode, and I could really be functioning at a higher level?

Right. That’s like the fish and the fishbowl phenomenon: The fish doesn’t know that he’s in the fishbowl, let alone that he’s in water. Also, when you’re sleep deprived, research has shown that you’re really bad at being able to tell that you’re sleep deprived.

A lot of this has to do with stress in our environment and our external need to work all the time. This is what’s driving the fact that we’re sleeping so poorly nowadays.

How else does the workplace affect sleep?

I think of sleep like the new worker’s rights: We’re being worked to the point that we’re not sleeping, and it’s having physical detriments on our health and wellbeing.

People should be able to sleep like they’re able to get healthcare. This also means making our work environments more conducive to sleep. For optimum productivity, we need around eight hours of sleep, right? But that doesn’t have to be in one go. Maybe I’ll get a little less than that during the night, and then I’ll take a 20-to-30-minute power nap at midday. There’s a siesta for a reason! New Yorkers oftentimes try to pound through with coffee and whatever, but giving in to your natural circadian rhythm during that afternoon lull might be a good thing. We weren’t made to produce for eight hours straight.

Let’s talk more about circadian rhythms. What are they, and why are they responsible for that mid-afternoon slump?

We evolved from bacteria in the ocean that could differentiate sunlight from darkness—that’s what ended up forming the human eye. That means every organism is responsive to a circadian rhythm that’s largely dictated by sunlight. The photo receptors in our eyes pick up on sunlight, which controls the release of melatonin and all these other neurotransmitters that dictate your energy levels throughout the day.

You have a peak moment of awakeness during the morning. After lunch you usually have a glucose spike, especially if you have a big heavy lunch, like a cheeseburger. That glucose spike combined with a circadian dip gives you a period of fatigue between around 2 and 4pm. You’ll then have another spike in alertness right before dinner, and then you’ll start getting tired again closer to bedtime. That’s your 24-hour circadian rhythm, basically.

Then there’s also something called “chronobiology.” You actually have genes that dictate whether you’re a morning person or an evening person.

Wait—what? Really?

Yeah! If you’re a morning person, they call it a lark. If you’re a night person, they call it a night owl. Your genes give you a greater proclivity to being a lark or an owl. And then some people have genes that make them very flexible. The environmental cues they react against are called zeitgebers.

Lightsabers?

Zeitgebers! It’s this weird German word. There’s a lot of cool words in sleep: like the photo receptors control the release of melatonin by sending signals to the suprachiasmatic nucleus, just like supercalifragilisticexpialidocious.

Anyway, basically your biggest zeitgeber is sunlight, and that’s the environmental cue that controls energy levels as well. But then also timing of meals, exercise, and having a consistent bedtime are all zeitgebers that impact your circadian rhythm. A bigger part of the problem is that we’re indoors so much now, so we don’t get that naturally occurring sunlight when we wake up in the morning. That’s one of the best things that you can do to entrench your circadian rhythm.

If your circadian rhythm is off, it negatively impacts your sleep quality. So having that consistent rhythm of going to bed and getting up at the same time will actually make your sleep more regenerative at night. Going for a walk outside and getting that sunlight in the morning is the best thing to do to wake up. Your circadian rhythm isn’t a fixed thing: It’s actually shiftable based on your environmental cues.

If you wake up in the middle of the night (say, to go to the bathroom) but get back to sleep quickly, does that screw around with your sleep quality?

It varies. There’s no clear answer. In our studies, we’ll play really loud sounds that people have no conscious awareness of at all: We can play a sound literally at 70 decibels, which is like someone screaming, and that’ll wake them up briefly and then they’ll go right back into the sleep stage that they were in. Other times you can get a full awakening, and you’ll have to go through the process again.

It’s actually pretty normal to wake up during the night, anyway. In The Canterbury Tales, one of the oldest manuscripts in English culture, they describe “second sleep.” There’s some evidence that we used to go to bed when the sun went down, then wake up for a little bit at night—putter around, make sure we’re not getting eaten by a lion—and then go back to sleep. So it’s pretty normal to like wake up in the middle of the night and use the bathroom or whatever.

How is society changing our relationship with sleep? What will be the consequences of this?

Gallup has reported that over the past 50 years, we’re sleeping a whole hour less per night than we did in the 1950s. That’s a lot. A lot of that has to do with having TV on all the time, and mobile phones are taking it to the next level. But I think the biggest issue right now is the lack of work/life balance. I mean, I’m an entrepreneur, so I feel like I’m basically always “on”. A lot of people have jobs where they’re getting emails at all hours of the night, and there’s no longer a nine-to-five schedule.

I think that’s why meditation is so in vogue right now. But I actually think sleep is a more regenerative process than meditation. A lot of times people talk about doing meditation around midday, but for most people I would recommend a quick power nap instead of a quick meditation.

But if I try to take a power nap at lunch and can’t get to sleep, haven’t I just wasted 20 minutes of my day that I could have been meditating—or working?

Even when you close your eyes and turn off your brain for a little bit—even if you don’t fully fall asleep—your brain creeps into theta waves. Similarly, when you meditate, you get a little bit of theta. So if you’re one of these people who really has a hard time with napping, maybe meditation could be better.

The most important thing is taking that time off—it’s more conducive to your productivity. A lot of times people think they can fight through and push harder and harder to get better results, but sleep can give you that, too. When you transition in and out of sleep, your brain produces theta waves, which help you think more divergently. That’s why a lot of times when you wake up from a power nap or from sleeping, you’ll be able to solve that intractable problem that you couldn’t earlier in the day. That’s one of the reasons I think taking a break—whether it’s meditation or a nap—during that circadian dip can be much more conducive to productivity.

This is especially true for creative jobs. Jobs used to be very manual, but as jobs are becoming more and more cognitive, I think caring for your cognition is going to become increasingly important for the work.

What are some tips for getting a better sleep?

You want a cold, quiet environment with no light: That’s basically the ideal way to improve your sleep quality. However, people have a different ideal sound, light, and temperature environment to improve their sleep quality. We need stimulus control: You want to save the bedroom for sleep and sex.

SOUND: We focus on sound a lot. Quiet environments are going to improve your sleep quality. Your brain has these micro arousals throughout the night without you being consciously aware of it—even an air-conditioning unit turning on wakes up your brain. So blocking out noises is a low-hanging fruit to improve your sleep quality. Bose just released an earbud that you can sleep with, for example.

There’s this new finding where playing sounds at a certain frequency when your brain is in deep sleep actually increases the percentage of time spent in deep sleep. We’re publishing this paper at the Society for Neuroscience conference in a couple of weeks, and it’s basically what my TED talk is about. Playing these pulses at the same frequency as your deep-sleep brainwaves primes more deep sleep. Scientifically speaking, it’s a similar process to transcranial direct-current stimulation, except it doesn’t use electricity—just sound. Sound gets translated into electricity because it’s picked up by the auditory cortex while you’re sleeping.

TEMPERATURE: This is a big problem, especially if you have a sleep partner. Everyone has different natural body temperatures, and usually men run hotter than women, but it can go either way. That can be a big issue if you have different body temperatures, because then no one’s happy. I wrote an article called “Split blankets, not beds,” where I said that you shouldn’t share the same comforter. Of course it’s nice to share, and I do that at some points, but it’s also important to have different bedding on your bed so you can have that lighter sheet or comforter to try to mitigate differences in body temperature. There’s also something called a chili pad. You put it on half of your bed and it’ll dictate the temperature level on your half if you run at a different temperature than your sleep partner.

LIGHT: The other thing is no blue light close to bedtime. There are a lot of studies showing that screen time close to bed is bad. One of the ideal ways of using our app is to connect it to your Bluetooth speakers so that you can put your phone in another room: There is something important to not having your phone in reach, because then you’re looking at the screen and getting the brightness. If you live in the city and there are bright lights at night, having blackout shades can also be super useful.

STRESS: When you’re stressed, your fight-or-flight response is active during the night, and your sleep quality is going to be shallow. It’s natural: If you have kids, you are programmed to be able to respond to your environment during the night to make sure you’re not getting eaten by a predator. Parents have this issue when their fight-or-flight response system is overly activated by worrying about their kid, and that worry actually makes their sleep quality worse.

One of the things I recommend to people who have a racing mind and worrying thoughts about work is to set aside a time to get it all out during the day—encapsulate it in a little mental box so you’re not lying in bed having your mind race about all these things.

How do you feel about sleep trackers and wearables?

Probably the most common wearable for measuring sleep right now is the Fitbit. I’ve studied these devices in depth in a well-controlled laboratory experiment where we’re monitoring brainwaves. I can say the Fitbit is pretty accurate in measuring when you’re asleep and when you’re awake, but when it comes to measuring sleep stages, basically any device that measures heart rate, like the Apple Watch, is totally inaccurate. That’s because they don’t sample at the frequency necessary to get a good read on your sleep stages.

Fitbits can also cause bigger problems, because they stress you out about the fact you think you’re not getting enough deep sleep—even though they’re not good at accurately measuring sleep stages.

What about people who mess with their sleep cycle and try things like the da Vinci method, where you take a 20-minute nap every four hours?

That polyphasic sleep stuff? I mean, it’s just not enough sleep. It’s ridiculous.

I haven’t seen a study that empirically shows that it’s helpful. There is certainly a false myth that we need eight hours of continuous sleep: I think it’s possible to have your sleep be a little bit broken up and be perfectly healthy—but getting that eight hours is crucially important. The thing is that the placebo effect in some of these polyphasic sleep methods runs really high.

There have also been some studies showing that sleep deprivation could be a tool to combat persistent depression. How do you feel about that?

That was really interesting. If you have an extreme case of depression, sometimes some therapists will sleep deprive you a little bit. It’s basically to activate your fight-or-flight response and jolt you out of your depression. But things like empathy and working with others are also impacted when you’re sleep deprived, and you’re also more sensitive to pain. Some people are studying this link to address the opioid epidemic and chronic pain through actually sleeping better: Chronic pain might be associated with deep sleep.



(1992) Torvalds and Tanenbaum on Linux's Design

LINUX is obsolete (ast, 29.01.92 04:12)

I was in the U.S. for a couple of weeks, so I haven't commented much on
LINUX (not that I would have said much had I been around), but for what
it is worth, I have a couple of comments now.

As most of you know, for me MINIX is a hobby, something that I do in the
evening when I get bored writing books and there are no major wars,
revolutions, or senate hearings being televised live on CNN.  My real
job is a professor and researcher in the area of operating systems.

As a result of my occupation, I think I know a bit about where operating
systems are going in the next decade or so.  Two aspects stand out:

1. MICROKERNEL VS MONOLITHIC SYSTEM
   Most older operating systems are monolithic, that is, the whole operating
   system is a single a.out file that runs in 'kernel mode.'  This binary
   contains the process management, memory management, file system and the
   rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360,
   MULTICS, and many more.

   The alternative is a microkernel-based system, in which most of the OS
   runs as separate processes, mostly outside the kernel.  They communicate
   by message passing.  The kernel's job is to handle the message passing,
   interrupt handling, low-level process management, and possibly the I/O.
   Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the
   not-yet-released Windows/NT.
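
For readers who haven't seen a microkernel up close, the rendezvous-style message passing described here boils down to a tiny, fixed-size message type plus blocking send/receive primitives the kernel copies between processes. Here is a minimal, toy sketch of that shape — all names are hypothetical, not MINIX's or Mach's actual primitives:

```c
/* Illustrative only: a toy, single-slot "rendezvous" mailbox showing the
 * shape of kernel message passing. Names are hypothetical. */
#include <stdio.h>
#include <string.h>

enum { MSG_READ = 1 };

typedef struct {
    int  m_source;   /* sending process slot */
    int  m_type;     /* request code */
    char m_data[56]; /* small fixed-size body keeps kernel copies cheap */
} message;

static message mailbox; /* one slot = sender blocks until it's consumed */

static void msg_send(int src, int type, const char *data) {
    mailbox.m_source = src;
    mailbox.m_type   = type;
    strncpy(mailbox.m_data, data, sizeof mailbox.m_data - 1);
}

static void msg_receive(message *m) { *m = mailbox; }

int main(void) {
    message m;
    msg_send(7, MSG_READ, "/etc/motd"); /* user process asks the FS server */
    msg_receive(&m);                    /* FS server picks the request up */
    printf("FS: process %d wants to read %s\n", m.m_source, m.m_data);
    return 0;
}
```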

   While I could go into a long story here about the relative merits of the
   two designs, suffice it to say that among the people who actually design
   operating systems, the debate is essentially over.  Microkernels have won.
   The only real argument for monolithic systems was performance, and there
   is now enough evidence showing that microkernel systems can be just as
   fast as monolithic systems (e.g., Rick Rashid has published papers comparing
   Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

   MINIX is a microkernel-based system.  The file system and memory management
   are separate processes, running outside the kernel.  The I/O drivers are
   also separate processes (in the kernel, but only because the brain-dead
   nature of the Intel CPUs makes that difficult to do otherwise).  LINUX is
   a monolithic style system.  This is a giant step back into the 1970s.
   That is like taking an existing, working C program and rewriting it in
   BASIC.  To me, writing a monolithic system in 1991 is a truly poor idea.


2. PORTABILITY
   Once upon a time there was the 4004 CPU.  When it grew up it became an
   8008.  Then it underwent plastic surgery and became the 8080.  It begat
   the 8086, which begat the 8088, which begat the 80286, which begat the
   80386, which begat the 80486, and so on unto the N-th generation.  In
   the meantime, RISC chips happened, and some of them are running at over
   100 MIPS.  Speeds of 200 MIPS and more are likely in the coming years.
   These things are not going to suddenly vanish.  What is going to happen
   is that they will gradually take over from the 80x86 line.  They will
   run old MS-DOS programs by interpreting the 80386 in software.  (I even
   wrote my own IBM PC simulator in C, which you can get by FTP from
   ftp.cs.vu.nl =  192.31.231.42 in dir minix/simulator.)  I think it is a
   gross error to design an OS for any specific architecture, since that is
   not going to be around all that long.

   MINIX was designed to be reasonably portable, and has been ported from the
   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
   LINUX is tied fairly closely to the 80x86.  Not the way to go.

Don't get me wrong, I am not unhappy with LINUX.  It will get all the people
who want to turn MINIX into BSD UNIX off my back.  But in all honesty, I would
suggest that people who want a **MODERN** "free" OS look around for a
microkernel-based, portable OS, like maybe GNU or something like that.


Andy Tanenbaum (a...@cs.vu.nl)


P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
space), but it is far from complete.  If there are any people who would
like to work on that, please let me know.  To run Amoeba you need a few 386s,
one of which needs 16M, and all of which need the WD Ethernet card.

LINUX is obsolete David Megginson 29.01.92 06:12

I would like to at least look at LINUX, but I cannot, since I run
a 68000-based machine. In any case, it is nice having the kernel
independent, since patches like the multi-threaded FS patch don't
have to exist in a different version for each CPU.

I second everything AST said, except that I would like to see
the kernel _more_ independent from everything else. Why does the
Intel architecture _not_ allow drivers to be independent programs?

I also don't like the fact that the kernel, mm and fs share the
same configuration files. Since they _are_ independent, they should
have more of a sense of independence.


David

#################################################################
David Megginson                  meg...@epas.utoronto.ca
Centre for Medieval Studies      da...@doe.utoronto.ca
University of Toronto            39 Queen's Park Cr. E.
#################################################################

LINUX is obsolete ast 29.01.92 10:03
In article <1992Jan29.1...@epas.toronto.edu> meg...@epas.utoronto.ca (David Megginson) writes:
>
>Why does the
>Intel architecture _not_ allow drivers to be independent programs?

The drivers have to read and write the device registers in I/O space, and
this cannot be done in user mode on the 286 and 386. If it were possible
to do I/O in a protected way in user space, all the I/O tasks could have
been user programs, like FS and MM.

Andy Tanenbaum (a...@cs.vu.nl)
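
The constraint Tanenbaum cites is that device registers sit in a separate I/O address space reached only by IN/OUT instructions, which the 286/386 restrict to sufficiently privileged code. As a sketch of what such port access looks like in GCC-style C — assuming a 16550-compatible UART at the conventional COM1 ports (data at 0x3F8, line status at 0x3FD) — this only runs in, or as, kernel code:

```c
/* Sketch: raw port I/O, the thing user mode can't do on a 286/386
 * because IN/OUT fault when CPL exceeds IOPL. GCC inline assembly. */
#include <stdint.h>

static inline void outb(uint16_t port, uint8_t val) {
    __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint8_t inb(uint16_t port) {
    uint8_t val;
    __asm__ volatile ("inb %1, %0" : "=a"(val) : "Nd"(port));
    return val;
}

/* e.g. polling a 16550-style UART before writing a byte: */
static void serial_putc(char c) {
    while ((inb(0x3FD) & 0x20) == 0)  /* wait: transmit holding reg empty */
        ;
    outb(0x3F8, (uint8_t)c);          /* write the byte to COM1 */
}
```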

LINUX is obsolete Kevin Brown 29.01.92 17:36

Of course, there are some things that are best left to the kernel, be it
micro or monolithic.  Like things that require playing with the process'
stack, e.g. signal handling.  Like memory allocation.  Things like that.

The microkernel design is probably a win, all in all, over a monolithic
design, but it depends on what you put in the kernel and what you leave
out.

>   MINIX is a microkernel-based system.  The file system and memory management
>   are separate processes, running outside the kernel.  The I/O drivers are
>   also separate processes (in the kernel, but only because the brain-dead
>   nature of the Intel CPUs makes that difficult to do otherwise).  

Minix is a microkernel design, of sorts.  The problem is that it gives special
privileges to mm and fs, when there shouldn't be any (at least for fs).  It
also fails to integrate most of the functionality of mm in the kernel itself,
and this makes things like signal handling and memory allocation *really*
ugly.  If you did these things in the kernel itself, then signal handling
would be as simple as setting a virtual interrupt vector and causing the
signalled process to receive that interrupt (with the complication that
system calls might have to be terminated.  Which means that a message would
have to be sent to every process that is servicing the process' system call,
if any.  It's considerations like these that make the monolithic kernel
design appealing).
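
The "system calls might have to be terminated" complication is visible from user space as plain Unix EINTR semantics. A small POSIX demo (standard APIs only — the kernel-side plumbing that makes this work is exactly what is being debated here):

```c
/* A slow system call returns early with EINTR when a signal arrives. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_alarm(int sig) { (void)sig; /* just interrupt the read */ }

int main(void) {
    struct sigaction sa = {0};
    sa.sa_handler = on_alarm;          /* note: no SA_RESTART */
    sigaction(SIGALRM, &sa, NULL);

    alarm(2);                          /* signal will arrive mid-read */
    char buf[64];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && errno == EINTR)
        printf("read() terminated by a signal, as Unix requires\n");
    return 0;
}
```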

The *entire* system call interface in Minix needs to be rethought.  As it
stands right now, the file system is not just a file system, it's also a
system-call server.  That functionality needs to be separated out in order
to facilitate a multiple file system architecture.  Message passing is
probably the right way to go about making the call and waiting for it, but
the message should go to a system call server, not the file system itself.

In order to handle all the special caveats of the Unix API, you end up writing
a monolithic "kernel" even if you're using a microkernel base.  You end up
with something called a "server", and an example is the BSD server that runs
under Mach.

And, in any case, the message-passing in Minix needs to be completely redone.
As it is, it's a kludge.  I've been giving this some thought, but I haven't
had time to do anything with what I've thought of so far.  Suffice it to say
that the proper way to do message-passing is probably with message ports
(both public and private), with the various visible parts of the operating
system having public message ports.  Chances are, that ends up being the
system call server only, though this will, of course, depend on the goals
of the design.
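
To make the message-port idea concrete, here is a rough sketch of a buffered port, as opposed to MINIX's strict rendezvous: the port owns a queue, so senders need not block until the receiver is ready. Every name below is hypothetical:

```c
/* Sketch of a buffered "message port"; all names invented. */
#include <stddef.h>

#define PORT_DEPTH 32

typedef struct {
    int  m_source;
    int  m_type;
    char m_data[56];
} message;

typedef struct {
    int     owner;              /* task allowed to receive on this port */
    int     is_public;          /* e.g. the system-call server's port */
    size_t  head, tail, count;  /* ring-buffer bookkeeping */
    message queue[PORT_DEPTH];
} msg_port;

/* Non-blocking enqueue: fails only when the port is full. */
static int port_send(msg_port *p, const message *m) {
    if (p->count == PORT_DEPTH)
        return -1;
    p->queue[p->tail] = *m;
    p->tail = (p->tail + 1) % PORT_DEPTH;
    p->count++;
    return 0;
}

int main(void) {
    msg_port syscall_port = { .owner = 0, .is_public = 1 };
    message m = { .m_source = 7, .m_type = 1, .m_data = "open /tmp/x" };
    return port_send(&syscall_port, &m) == 0 ? 0 : 1;
}
```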

>   LINUX is
>   a monolithic style system.  This is a giant step back into the 1970s.
>   That is like taking an existing, working C program and rewriting it in
>   BASIC.  To me, writing a monolithic system in 1991 is a truly poor idea.

Depends on the design criteria, as you should know.  If your goal is to
design a Unix workalike that is relatively simple and relatively small,
then a monolithic design is probably the right approach for the job, because
unless you're designing for really backwards hardware, the problems of
things like interrupted system calls, memory allocation within the kernel
(so you don't have to statically allocate *everything* in your OS), signal
handling, etc. all go away (or are at least minimized) if you use a
monolithic design.  If you want the ability to bring up and take down
file systems, add and remove device drivers, etc., all at runtime, then
a microkernel approach is the right solution.

Frankly, I happen to like the idea of removable device drivers and such,
so I tend to favor the microkernel approach as a general rule.

>2. PORTABILITY
>   Once upon a time there was the 4004 CPU.  When it grew up it became an
>   8008.  Then it underwent plastic surgery and became the 8080.  It begat
>   the 8086, which begat the 8088, which begat the 80286, which begat the
>   80386, which begat the 80486, and so on unto the N-th generation.  In
>   the meantime, RISC chips happened, and some of them are running at over
>   100 MIPS.  Speeds of 200 MIPS and more are likely in the coming years.
>   These things are not going to suddenly vanish.  What is going to happen
>   is that they will gradually take over from the 80x86 line.  They will
>   run old MS-DOS programs by interpreting the 80386 in software.  (I even
>   wrote my own IBM PC simulator in C, which you can get by FTP from
>   ftp.cs.vu.nl =  192.31.231.42 in dir minix/simulator.)  I think it is a
>   gross error to design an OS for any specific architecture, since that is
>   not going to be around all that long.

Again, look at the design criteria.  If portability isn't an issue, then
why worry about it?  While LINUX suffers from lack of portability, portability
was obviously never much of a consideration for its author, who explicitly
stated that it was written as an exercise in learning about the 386
architecture.

And, in any case, while MINIX is portable in the sense that most of the code
can be ported to other platforms, it *still* suffers from the limitations of
the original target machine that drove the walk down the design decision tree.
The message passing is a kludge because the 8088 is slow.  The kernel doesn't
do memory allocation (thus not allowing FS and the drivers to get away with
using a malloc library or some such, and thus causing everyone to have to
statically allocate everything), probably due to some other limitation of
the 8088.  The very idea of using "clicks" is obviously the result of the
segmented architecture of the 8088.  The file system size is too limited
(theoretically fixed in 1.6, but now you have *two* file system formats to
contend with.  If having the file system as a separate process is such a
big win, then why don't we have two file system servers, eh?  Why simply
extend the existing Minix file system instead of implementing BSD's FFS
or some other high-performance file system?  It's not that I'm greedy
or anything... :-).

>   MINIX was designed to be reasonably portable, and has been ported from the
>   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
>   LINUX is tied fairly closely to the 80x86.  Not the way to go.

All in all, I tend to agree.

>Don't get me wrong, I am not unhappy with LINUX.  It will get all the people
>who want to turn MINIX into BSD UNIX off my back.  But in all honesty, I would
>suggest that people who want a **MODERN** "free" OS look around for a
>microkernel-based, portable OS, like maybe GNU or something like that.

Yeah, right.  Point me someplace where I can get a free "modern" OS and I'll
gladly investigate.  But the GNU OS is currently vaporware, and as far as I'm
concerned it will be for a LOOOOONG time to come.

Any other players?  BSD 4.4 is a monolithic architecture, so by your
definition it's out.  Mach is free, but the BSD server isn't (AT&T code,
you know), and in any case, isn't the BSD server something you'd consider
to be a monolithic design???

Really.  Why do you think LINUX is as popular as it is?  The answer is
simple, of course: because it's the *only* free Unix workalike OS in
existence.  BSD doesn't qualify (yet).  Minix doesn't qualify.  XINU
isn't even in the running.  GNU's OS is vaporware, and probably will
be for a long time, so *by definition* it's not in the running.  Any
other players?  I haven't heard of any...

>Andy Tanenbaum (a...@cs.vu.nl)

Minix is an excellent piece of work.  A good starting point for anyone who
wants to learn about operating systems.  But it needs rewriting to make it
truly elegant and functional.  As it is, there are too many kludges and
hacks (e.g., the message passing).

                                Kevin Brown

LINUX is obsolete Linus Benedict Torvalds 29.01.92 15:14
Well, with a subject like this, I'm afraid I'll have to reply.
Apologies to minix-users who have heard enough about linux anyway.  I'd
like to be able to just "ignore the bait", but ...  Time for some
serious flamefesting!
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>I was in the U.S. for a couple of weeks, so I haven't commented much on
>LINUX (not that I would have said much had I been around), but for what
>it is worth, I have a couple of comments now.
>
>As most of you know, for me MINIX is a hobby, something that I do in the
>evening when I get bored writing books and there are no major wars,
>revolutions, or senate hearings being televised live on CNN.  My real
>job is a professor and researcher in the area of operating systems.

You use this as an excuse for the limitations of minix? Sorry, but you
lose: I've got more excuses than you have, and linux still beats the
pants off minix in almost all areas.  Not to mention the fact that most
of the good code for PC minix seems to have been written by Bruce Evans.

Re 1: you doing minix as a hobby - look at who makes money off minix,
and who gives linux out for free.  Then talk about hobbies.  Make minix
freely available, and one of my biggest gripes with it will disappear.
Linux has very much been a hobby (but a serious one: the best type) for
me: I get no money for it, and it's not even part of any of my studies
in the university.  I've done it all on my own time, and on my own
machine.

Re 2: your job is being a professor and researcher: That's one hell of a
good excuse for some of the brain-damages of minix. I can only hope (and
assume) that Amoeba doesn't suck like minix does.

>1. MICROKERNEL VS MONOLITHIC SYSTEM

True, linux is monolithic, and I agree that microkernels are nicer. With
a less argumentative subject, I'd probably have agreed with most of what
you said. From a theoretical (and aesthetical) standpoint linux loses.
If the GNU kernel had been ready last spring, I'd not have bothered to
even start my project: the fact is that it wasn't and still isn't. Linux
wins heavily on points of being available now.

>   MINIX is a microkernel-based system. [deleted, but not so that you
> miss the point ]  LINUX is a monolithic style system.

If this was the only criterion for the "goodness" of a kernel, you'd be
right.  What you don't mention is that minix doesn't do the micro-kernel
thing very well, and has problems with real multitasking (in the
kernel).  If I had made an OS that had problems with a multithreading
filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my
damndest to make others forget about the fiasco.

[ yes, I know there are multithreading hacks for minix, but they are
hacks, and bruce evans tells me there are lots of race conditions ]

>2. PORTABILITY

"Portability is for people who cannot write new programs"
                -me, right now (with tongue in cheek)

The fact is that linux is more portable than minix.  What? I hear you
say.  It's true - but not in the sense that ast means: I made linux as
conformant to standards as I knew how (without having any POSIX standard
in front of me).  Porting things to linux is generally /much/ easier
than porting them to minix.

I agree that portability is a good thing: but only where it actually has
some meaning.  There is no idea in trying to make an operating system
overly portable: adhering to a portable API is good enough.  The very
/idea/ of an operating system is to use the hardware features, and hide
them behind a layer of high-level calls.  That is exactly what linux
does: it just uses a bigger subset of the 386 features than other
kernels seem to do.  Of course this makes the kernel proper unportable,
but it also makes for a /much/ simpler design.  An acceptable trade-off,
and one that made linux possible in the first place.

I also agree that linux takes the non-portability to an extreme: I got
my 386 last January, and linux was partly a project to teach me about
it.  Many things should have been done more portably if it would have
been a real project.  I'm not making overly many excuses about it
though: it was a design decision, and last april when I started the
thing, I didn't think anybody would actually want to use it.  I'm happy
to report I was wrong, and as my source is freely available, anybody is
free to try to port it, even though it won't be easy.

                Linus

PS. I apologise for sometimes sounding too harsh: minix is nice enough
if you have nothing else. Amoeba might be nice if you have 5-10 spare
386's lying around, but I certainly don't. I don't usually get into
flames, but I'm touchy when it comes to linux :)

LINUX is obsolete Louie 29.01.92 18:55
In <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>But in all honesty, I would
>suggest that people who want a **MODERN** "free" OS look around for a
>microkernel-based, portable OS, like maybe GNU or something like that.

There are really no other alternatives other than Linux for people like
me who want a "free" OS.  Considering that the majority of people who
would use a "free" OS use the 386, portability is really not all that
big of a concern.  If I had a Sparc I would use Solaris.  

As it stands, I installed Linux with gcc, emacs 18.57, kermit and all of the
GNU utilities without any trouble at all.  No need to apply patches. I
just followed the installation instructions.  I can't get an OS like
this *anywhere* for the price to do my Computer Science homework. And
it seems like network support and then X-Windows will be ported to Linux
well before Minix.  This is something that would be really useful. In my
opinion, portability of standard Unix software is important also.

I know that the design using a monolithic system is not as good as the
microkernel.  But for the short term future (And I know I won't/can't
be uprading from my 386), Linux suits me perfectly.

Philip Wu
p...@unixg.ubc.ca

LINUX is obsolete Jim Burns 29.01.92 19:39
in article <12...@star.cs.vu.nl>, a...@cs.vu.nl (Andy Tanenbaum) says:
> The drivers have to read and write the device registers in I/O space, and
> this cannot be done in user mode on the 286 and 386. If it were possible
> to do I/O in a protected way in user space, all the I/O tasks could have
> been user programs, like FS and MM.

The standard way of doing that is to trap on i/o space protection
violations, and emulate the i/o for the user.
--
BURNS,JIM (returned student)
Georgia Institute of Technology, 30178 Georgia Tech Station,
Atlanta Georgia, 30332            | Internet: gt0...@prism.gatech.edu
uucp:          ...!{decvax,hplabs,ncar,purdue,rutgers}!gatech!prism!gt0178a
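
A sketch of that trap-and-emulate idea: a user-mode IN or OUT raises a general-protection fault, and the kernel's handler decodes the faulting instruction and performs (or denies) the access on the task's behalf. The frame layout, policy check, and port stubs below are invented for illustration, and a real handler would decode far more instruction forms:

```c
/* Toy trap-and-emulate of IN AL,DX (0xEC) and OUT DX,AL (0xEE). */
#include <stdint.h>
#include <stdio.h>

struct trap_frame { uintptr_t eip; uint32_t eax, edx; };

/* Stubs standing in for real port access and a per-task policy check. */
static uint8_t real_inb(uint16_t port)             { (void)port; return 0x41; }
static void    real_outb(uint16_t port, uint8_t v) { printf("out %#x <- %#x\n", port, v); }
static int     task_may_use_port(uint16_t port)    { (void)port; return 1; }

static void gp_fault_handler(struct trap_frame *tf) {
    const uint8_t *op = (const uint8_t *)tf->eip;
    uint16_t port = (uint16_t)tf->edx;            /* IN/OUT via DX */

    if (!task_may_use_port(port))
        return;                                    /* would raise SIGSEGV */
    if (op[0] == 0xEC) {                           /* IN  AL, DX */
        tf->eax = (tf->eax & ~0xFFu) | real_inb(port);
        tf->eip += 1;                              /* step past the insn */
    } else if (op[0] == 0xEE) {                    /* OUT DX, AL */
        real_outb(port, (uint8_t)tf->eax);
        tf->eip += 1;
    }
}

int main(void) {  /* drive the handler with a fake faulting IN AL,DX */
    uint8_t text[] = { 0xEC };
    struct trap_frame tf = { (uintptr_t)text, 0, 0x3F8 };
    gp_fault_handler(&tf);
    printf("AL after emulated IN: %#x\n", (unsigned)(tf.eax & 0xFF));
    return 0;
}
```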

LINUX is obsolete ast 30.01.92 05:44
In article <1992Jan29.2...@klaava.Helsinki.FI> torv...@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:
>You use this [being a professor] as an excuse for the limitations of minix?
The limitations of MINIX relate at least partly to my being a professor:
An explicit design goal was to make it run on cheap hardware so students
could afford it.  In particular, for years it ran on a regular 4.77 MHZ PC
with no hard disk.  You could do everything here including modify and recompile
the system.  Just for the record, as of about 1 year ago, there were two
versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M).
The PC version was outselling the 286/386 version by 2 to 1.  I don't have
figures, but my guess is that the fraction of the 60 million existing PCs that
are 386/486 machines as opposed to 8088/286/680x0 etc is small.  Among students
it is even smaller. Making software free, but only for folks with enough money
to buy first class hardware is an interesting concept.
Of course 5 years from now that will be different, but 5 years from now
everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.
>Re 2: your job is being a professor and researcher: That's one hell of a
>good excuse for some of the brain-damages of minix. I can only hope (and
>assume) that Amoeba doesn't suck like minix does.
Amoeba was not designed to run on an 8088 with no hard disk.

>If this was the only criterion for the "goodness" of a kernel, you'd be
>right.  What you don't mention is that minix doesn't do the micro-kernel
>thing very well, and has problems with real multitasking (in the
>kernel).  If I had made an OS that had problems with a multithreading
>filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my
>damndest to make others forget about the fiasco.
A multithreaded file system is only a performance hack.  When there is only
one job active, the normal case on a small PC, it buys you nothing and adds
complexity to the code.  On machines fast enough to support multiple users,
you probably have enough buffer cache to ensure a high cache hit rate, in
which case multithreading also buys you nothing.  It is only a win when there
are multiple processes actually doing real disk I/O.  Whether it is worth
making the system more complicated for this case is at least debatable.

I still maintain the point that designing a monolithic kernel in 1991 is
a fundamental error.  Be thankful you are not my student.  You would not
get a high grade for such a design :-)


>The fact is that linux is more portable than minix.  What? I hear you
>say.  It's true - but not in the sense that ast means: I made linux as
>conformant to standards as I knew how (without having any POSIX standard
>in front of me).  Porting things to linux is generally /much/ easier
>than porting them to minix.
MINIX was designed before POSIX, and is now being (slowly) POSIXized as
everyone who follows this newsgroup knows.  Everyone agrees that user-level
standards are a good idea.  As an aside, I congratulate you for being able
to write a POSIX-conformant system without having the POSIX standard in front
of you. I find it difficult enough after studying the standard at great length.

My point is that writing a new operating system that is closely tied to any
particular piece of hardware, especially a weird one like the Intel line,
is basically wrong.  An OS itself should be easily portable to new hardware
platforms.  When OS/360 was written in assembler for the IBM 360
25 years ago, they probably could be excused.  When MS-DOS was written
specifically for the 8088 ten years ago, this was less than brilliant, as
IBM and Microsoft now only too painfully realize. Writing a new OS only for the
386 in 1991 gets you your second 'F' for this term.  But if you do real well
on the final exam, you can still pass the course.


Prof. Andrew S. Tanenbaum (a...@cs.vu.nl)

LINUX is obsolete David Feustel 30.01.92 10:57
a...@cs.vu.nl (Andy Tanenbaum) writes:

>I still maintain the point that designing a monolithic kernel in 1991 is
>a fundamental error.  Be thankful you are not my student.  You would not
>get a high grade for such a design :-)

That's ok. Einstein got lousy grades in math and physics.
--
David Feustel N9MYI, 1930 Curdes Ave, Fort Wayne, IN 46805. (219)482-9631
feu...@netcom.com
=== NBC News: GE's Advertising And Public Relations Agency ===

LINUX is obsolete David Megginson 30.01.92 11:58
In article <1992Jan30.185728.26477feustel@netcom.COM> feustel@netcom.COM (David Feustel) writes:
>a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>
>>I still maintain the point that designing a monolithic kernel in 1991 is
>>a fundamental error.  Be thankful you are not my student.  You would not
>>get a high grade for such a design :-)
>
>That's ok. Einstein got lousy grades in math and physics.

And Dan Quayle got low grades in political science. I think that there
are more Dan Quayles than Einsteins out there... ;-)


David

#################################################################
David Megginson                  meg...@epas.utoronto.ca
Centre for Medieval Studies      da...@doe.utoronto.ca
University of Toronto            39 Queen's Park Cr. E.
#################################################################

LINUX is obsolete Randy Burns 30.01.92 12:33
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>In article <1992Jan29.2...@klaava.Helsinki.FI> torv...@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:
>Of course 5 years from now that will be different, but 5 years from now
>everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.
Well, I for one would _love_ to see this happen.
>>The fact is that linux is more portable than minix.  What? I hear you
>>say.  It's true - but not in the sense that ast means: I made linux as
>>conformant to standards as I knew how (without having any POSIX standard
>>in front of me).  Porting things to linux is generally /much/ easier
>>than porting them to minix.
........

>My point is that writing a new operating system that is closely tied to any
>particular piece of hardware, especially a weird one like the Intel line,
>is basically wrong.
First off, the parts of Linux tuned most finely to the 80x86 are the Kernel
and the devices. My own sense is that even if Linux is simply a stopgap
measure to let us all run GNU software, it is still worthwhile to have
a finely tuned kernel for the most numerous architecture presently in
existence.
> An OS itself should be easily portable to new hardware
>platforms.
Well, the only part of Linux that isn't portable is the kernel and drivers.
Compared to the compilers, utilities, windowing system etc. this is really
a small part of the effort. Since Linux has a large degree of call
compatibility with portable OS's I wouldn't complain. I'm personally
very grateful to have an OS that makes it more likely that some of us will
be able to take advantage of the software that has come out of Berkeley,
FSF, CMU etc. It may well be that in 2-3 years when ultra cheap BSD
variants and Hurd proliferate, that Linux will be obsolete. Still, right
now Linux greatly reduces the cost of using tools like gcc, bison, bash
which are useful in the development of  such an OS.
Apologies (was Re: LINUX is obsolete) Linus Benedict Torvalds 30.01.92 07:38
In article <1992Jan29.2...@klaava.Helsinki.FI> I wrote:
>Well, with a subject like this, I'm afraid I'll have to reply.

And reply I did, with complete abandon, and no thought for good taste
and netiquette.  Apologies to ast, and thanks to John Nall for a friendly
"that's not how it's done"-letter.  I over-reacted, and am now composing
a (much less acerbic) personal letter to ast.  Hope nobody was turned
away from linux due to it being (a) possibly obsolete (I still think
that's not the case, although some of the criticisms are valid) and (b)
written by a hothead :-)

                Linus "my first, and hopefully last flamefest" Torvalds

LINUX is obsolete David Feustel 30.01.92 15:15
meg...@epas.utoronto.ca (David Megginson) writes:
>In article <1992Jan30.185728.26477feustel@netcom.COM> feustel@netcom.COM (David Feustel) writes:
>>a...@cs.vu.nl (Andy Tanenbaum) writes:
>>
>>
>>>I still maintain the point that designing a monolithic kernel in 1991 is
>>>a fundamental error.  Be thankful you are not my student.  You would not
>>>get a high grade for such a design :-)
>>
>>That's ok. Einstein got lousy grades in math and physics.
>And Dan Quayle got low grades in political science. I think that there
>are more Dan Quayles than Einsteins out there... ;-)

But the existence of Linux suggests that we may have more of an
Einstein than a Quail here.


--
David Feustel N9MYI, 1930 Curdes Ave, Fort Wayne, IN 46805. (219)482-9631
feu...@netcom.com
=== NBC News: GE's Advertising And Public Relations Agency ===
posixiation (was Re: LINUX is obsolete) Geoff Collyer 30.01.92 17:13
Andy Tanenbaum:

>MINIX was designed before POSIX, and is now being (slowly) POSIXized as
>everyone who follows this newsgroup knows.

May I recommend the use of the verb "posixiate" (by analogy with
asphyxiate) instead of "posixize"?  Similarly, I prefer "ansitise"
(converse and anagram of "sanitise") to "ansify".
--
Geoff Collyer                world.std.com!geoff, uunet.uu.net!geoff

LINUX is obsolete Kevin Brown 30.01.92 23:43
Sorry, but I just can't resist this thread...:-)
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>In article <1992Jan29.2...@klaava.Helsinki.FI> torv...@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:
>>You use this [being a professor] as an excuse for the limitations of minix?
>The limitations of MINIX relate at least partly to my being a professor:
>An explicit design goal was to make it run on cheap hardware so students
>could afford it.  In particular, for years it ran on a regular 4.77 MHZ PC
>with no hard disk.  

And an explicit design goal of Linux was to take advantage of the special
features of the 386 architecture.  So what exactly is your point?  Different
design goals get you different designs.  You ought to know that.

>You could do everything here including modify and recompile
>the system.  Just for the record, as of about 1 year ago, there were two
>versions, one for the PC (360K diskettes) and one for the 286/386 (1.2M).
>The PC version was outselling the 286/386 version by 2 to 1.  I don't have
>figures, but my guess is that the fraction of the 60 million existing PCs that
>are 386/486 machines as opposed to 8088/286/680x0 etc is small.  Among students
>it is even smaller.

I find it very interesting that you claim here that Minix was designed
primarily for cheap hardware (in particular, the IBM PC/XT with no hard
disk) and yet elsewhere have also mentioned the virtues of being portable
across hardware platforms.  Well, if you insist on designing the thing
with the lowest common denominator as your basis, that's fine, but of
course the end result will be less than pretty unless designed *very*
carefully.

>Making software free, but only for folks with enough money
>to buy first class hardware is an interesting concept.

Except that Linux was designed more for the purposes of the designer than
anything else.  If I were writing an OS, I'd design it to suit myself, too.
It's just that Linus was nice enough to share his code with the rest of us.

>Of course 5 years from now that will be different, but 5 years from now
>everyone will be running free GNU on their 200 MIPS, 64M SPARCstation-5.

Maybe.  But by then, the 386/486 will probably be where the PC is now:
everyone will have one and they'll be dirt cheap.  The timing will be
about right.  In which case Linux will fit right in, wouldn't you say?

>>Re 2: your job is being a professor and researcher: That's one hell of a
>>good excuse for some of the brain-damages of minix. I can only hope (and
>>assume) that Amoeba doesn't suck like minix does.
>Amoeba was not designed to run on an 8088 with no hard disk.

Here's a question for you: as a general rule, when you go to design an
operating system, do you design it for specific capabilities and then run
it on whatever hardware will do the job, or do you design it with the
hardware as a target and fit the capabilities to the hardware?  With respect
to Minix, it seems you did the latter, but I don't know whether or not you
did that with Amoeba.

>>If this was the only criterion for the "goodness" of a kernel, you'd be
>>right.  What you don't mention is that minix doesn't do the micro-kernel
>>thing very well, and has problems with real multitasking (in the
>>kernel).  If I had made an OS that had problems with a multithreading
>>filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my
>>damndest to make others forget about the fiasco.
>A multithreaded file system is only a performance hack.  

Bull.  A multithreaded file system has a completely different design than
a single-threaded file system and has different design criteria than a
single-threaded file system.

>When there is only one job active, the normal case on a small PC, it buys
>you nothing and adds complexity to the code.  

If there is only going to be one job active anyway then *why bother with
multitasking at all*????

If you're going to implement multitasking, then don't do a halfway job
of it.  On the other hand, if you're going to assume that there will be
only one job active anyway, then don't bother with multitasking (after
all, it *does* complicate things :-).

>On machines fast enough to
>support multiple users, you probably have enough buffer cache to insure a
>hit cache hit rate, in which case multithreading also buys you nothing.  

Maybe.  Multiple users means multiple things being done simultaneously.  I
wouldn't bet on the buffer cache buying you so much that multithreading
makes no difference.  It's one thing if the users are doing something
simple, like editing a file.  It's another thing if they're compiling,
reading news, or other things that touch lots of different files.

>It is only a win when there are multiple processes actually doing real disk
>I/O.  

Which happens a *lot* when you're running multiple users.  Or when you're
a machine hooked up to the net and handling news traffic.

>Whether it is worth making the system more complicated for this case is
>at least debatable.

Oh, come on.  How tough is it to implement a multi-threaded file system?
All you need is a decent *buffered* (preferably infinitely so)
message-passing system and a way to save your current state when you send
out a request to the device driver(s) to perform some work (and obviously
some way to restore that state).  Minix has the latter via the setjmp()/
longjmp() mechanism, but lacks the former in a serious way.
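
The setjmp()/longjmp() half of that recipe looks roughly like this in miniature: park the current request's continuation, go do other work, and jump back when the driver replies. Toy code, not MINIX's:

```c
/* Saving and resuming a file-system request with setjmp()/longjmp(). */
#include <setjmp.h>
#include <stdio.h>

static jmp_buf pending_read;   /* saved continuation for one request */

static void driver_reply(void) {
    printf("driver: disk block ready, resuming the file system\n");
    longjmp(pending_read, 1);  /* resume where the FS left off */
}

int main(void) {
    if (setjmp(pending_read) == 0) {
        printf("fs: request sent to driver, state parked\n");
        /* ...a multithreaded FS would now serve other callers... */
        driver_reply();        /* the reply arrives later */
    } else {
        printf("fs: continuing the original read()\n");
    }
    return 0;
}
```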

>I still maintain the point that designing a monolithic kernel in 1991 is
>a fundamental error.  

Not if you're trying to implement the system call semantics of Unix in a
reasonably simple and elegant way.

>Be thankful you are not my student.  You would not
>get a high grade for such a design :-)

Why not?  What's this big thing against monolithic kernels?  There are
certain classes of problems for which a monolithic kernel is a more
appropriate design than a microkernel architecture.  I think implementing
Unix semantics with a minimum of fuss is one such problem.

Unless you can suggest an elegant way to terminate a system call upon
receipt of a signal from within a microkernel OS?

>>The fact is that linux is more portable than minix.  What? I hear you
>>say.  It's true - but not in the sense that ast means: I made linux as
>>conformant to standards as I knew how (without having any POSIX standard
>>in front of me).  Porting things to linux is generally /much/ easier
>>than porting them to minix.
>MINIX was designed before POSIX, and is now being (slowly) POSIXized as
>everyone who follows this newsgroup knows.  Everyone agrees that user-level
>standards are a good idea.  As an aside, I congratulate you for being able
>to write a POSIX-conformant system without having the POSIX standard in front
>of you. I find it difficult enough after studying the standard at great length.
>
>My point is that writing a new operating system that is closely tied to any
>particular piece of hardware, especially a weird one like the Intel line,
>is basically wrong.  

Weird as the Intel line may be, it's *the* most popular line, by several
times.  So it's not like it's *that* big a loss.  And Intel hardware is
at least relatively cheap to come by, regardless of what your students
might tell you (why do you think they all own PCs?)...

>An OS itself should be easily portable to new hardware
>platforms.  

As long as you don't sacrifice too much in the way of performance or
architectural elegance in order to gain this.  Unfortunately, that's
*exactly* what happened with Minix: in attempting to implement it on
hardware of the lowest caliber, you ended up having to make design
decisions with respect to the architecture and implementation that have
made vintage Minix unusable as anything more than a personal toy operating
system.  For example: why didn't you implement a system call server as
a layer between the file system and user programs?  My guess: you didn't
have enough memory on the target machine to do it.

Put another way: you hit your original goal right on target, and are to
be applauded for that.  But in doing so, you missed a lot of other
targets that wouldn't have been hard to hit as well, with some
consideration of them.  I think.  But I wasn't there when you were making
the decisions, so it's real hard for me to say for sure.  I'm speaking
from hindsight, but you had the tough problem of figuring out what to do
without such benefit.

Now, *modified* Minix is usable.  Add a bigger buffer cache.  Modify it
so that it can take advantage of 386 protected mode.  Fix the tty driver
so that it will give you multiple consoles.  Fix the rs232 driver to deal
with DCD/DTR and do the right thing when carrier goes away.  Fix the pipes
so that read and write requests don't fail just because they happen to be
bigger than the size of a physical pipe.  Add shared text segments so you
maximize the use of your RAM.  Fix the scheduler so that it deals with
character I/O bound processes in a reasonable way.

>When OS/360 was written in assembler for the IBM 360
>25 years ago, they probably could be excused.  When MS-DOS was written
>specifically for the 8088 ten years ago, this was less than brilliant, as
>IBM and Microsoft now only too painfully realize.

Yeah, right.  Just what hardware do you think they'd like to port DOS to,
anyway?  I can't think of any.  I don't think IBM or Microsoft are
regretting *that* particular aspect of DOS.  Rather, they're probably
regretting the fact that it was written for the address space provided
by the 8088.

MS-DOS isn't less than brilliant because it was written for one machine
architecture.  It's less than brilliant because it doesn't do anything
well, *regardless* of its portability or lack thereof.


>Writing a new OS only for the
>386 in 1991 gets you your second 'F' for this term.  But if you do real well
>on the final exam, you can still pass the course.

He made his code freely redistributable.  *You* didn't even do that.  Just
for that move alone, he scores points in my book.  Of course, the
distribution technology available to him is much better than what was
available when you did Minix, so it's hard to fault you for that...

But I must admit, Minix is still one hell of a bargain, and I would never
hesitate to recommend it to anyone who wants to learn something about Unix
and operating systems in general.  As a working operating system (i.e.,
one intended for a multi-user environment), however, I'd hesitate to
recommend it, except that there really aren't any good alternatives
(except Linux, of course, at least tentatively.  I can't say for sure,
since I haven't checked out Linux yet), since it doesn't have the performance
capabilities that a working operating system needs.

>Prof. Andrew S. Tanenbaum (a...@cs.vu.nl)


                                Kevin Brown

LINUX is obsolete Linus Benedict Torvalds 31.01.92 02:33
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>The limitations of MINIX relate at least partly to my being a professor:
>An explicit design goal was to make it run on cheap hardware so students
>could afford it.

All right: a real technical point, and one that made some of my comments
inexcusable.  But at the same time you shoot yourself in the foot a bit:
now you admit that some of the errors of minix were that it was too
portable: including machines that weren't really designed to run unix.
That assumption led to the fact that minix now cannot easily be
extended to have things like paging, even for machines that would
support it.  Yes, minix is portable, but you can rewrite that as
"doesn't use any features", and still be right.

>A multithreaded file system is only a performance hack.

Not true.  It's a performance hack /on a microkernel/, but it's an
automatic feature when you write a monolithic kernel - one area where
microkernels don't work too well (as I pointed out in my personal mail
to ast).  When writing a unix the "obsolete" way, you automatically get
a multithreaded kernel: every process does its own job, and you don't
have to make ugly things like message queues to make it work
efficiently.

Besides, there are people who would consider "only a performance hack"
vital: unless you have a cray-3, I'd guess everybody gets tired of
waiting on the computer all the time. I know I did with minix (and yes,
I do with linux too, but it's /much/ better).

>I still maintain the point that designing a monolithic kernel in 1991 is
>a fundamental error.  Be thankful you are not my student.  You would not
>get a high grade for such a design :-)

Well, I probably won't get too good grades even without you: I had an
argument (completely unrelated - not even pertaining to OS's) with the
person here at the university that teaches OS design.  I wonder when
I'll learn :)

>My point is that writing a new operating system that is closely tied to any
>particular piece of hardware, especially a weird one like the Intel line,
>is basically wrong.

But /my/ point is that the operating system /isn't/ tied to any
processor line: UNIX runs on most real processors in existence.  Yes,
the /implementation/ is hardware-specific, but there's a HUGE
difference.  You mention OS/360 and MS-DOG as examples of bad designs
as they were hardware-dependent, and I agree.  But there's a big
difference between these and linux: linux API is portable (not due to my
clever design, but due to the fact that I decided to go for a fairly-
well-thought-out and tested OS: unix.)

If you write programs for linux today, you shouldn't have too many
surprises when you just recompile them for Hurd in the 21st century.  As
has been noted (not only by me), the linux kernel is a minuscule part of
a complete system: Full sources for linux currently run to about 200kB
compressed - full sources to a somewhat complete development system is
at least 10MB compressed (and easily much, much more). And all of that
source is portable, except for this tiny kernel that you can (provably:
I did it) re-write totally from scratch in less than a year without
having /any/ prior knowledge.

In fact the /whole/ linux kernel is much smaller than the 386-dependent
things in mach: i386.tar.Z for the current version of mach is well over
800kB compressed (823391 bytes according to nic.funet.fi).  Admittedly,
mach is "somewhat" bigger and has more features, but that should still
tell you something.

                Linus

LINUX is obsolete -Pete French. 31.01.92 01:49
in article <1992Jan30....@epas.toronto.edu>, meg...@epas.utoronto.ca (David Megginson) says:
> In article <1992Jan30.185728.26477feustel@netcom.COM> feustel@netcom.COM (David Feustel) writes:
>>
>>That's ok. Einstein got lousy grades in math and physics.
>
> And Dan Quayle got low grades in political science. I think that there
> are more Dan Quayles than Einsteins out there... ;-)

What a horrible thought !

But on the points about microkernel v monolithic, isn't this partly an
artifact of the language being used?  MINIX may well be designed as a
microkernel system, but in the end you still end up with a large
monolithic chunk of binary data that gets loaded in as "the OS".  Isn't it
written as separate programs simply because C does not support the idea
of multiple processes within a single piece of monolithic code?  Is there
any real difference between a microkernel written as several pieces of C
and a monolithic kernel written in something like OCCAM?  I would have
thought that in this case the monolithic design would be a better one
than the microkernel style since, with the advantage of inbuilt
language concurrency, the kernel could be made even more modular than the
MINIX one is.

Anyone for MINOX :-)

-bat.
--
-Pete French. (the -bat. )         /
Adaptive Systems Engineering      /  

ast's comments on OS's [was Re: LINUX is obsolete] Jyrki Kuoppala 31.01.92 04:07
In article <12...@star.cs.vu.nl>, ast@cs (Andy Tanenbaum) writes:
>who want to turn MINIX into BSD UNIX off my back.  But in all honesty, I would
>suggest that people who want a **MODERN** "free" OS look around for a
>microkernel-based, portable OS, like maybe GNU or something like that.

I hear bsd 4.4 might also become free and appear in the near future
for the 386, also someone's supposed to be working on bsd 4.4 on top
of the Mach microkernel, and then there's of course GNU.  Currently of
course for many people Linux is the OS to use because it's here now,
is free and works.

>P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
>space), but it is far from complete.  If there are any people who would
>like to work on that, please let me know.  To run Amoeba you need a few 386s,
>one of which needs 16M, and all of which need the WD Ethernet card.

A note here, the sources I've seen seem to imply that Amoeba will not
be free as in you won't be able to use it, copy it, enhance it, share
it etc. without paying $$ and/or asking permission from someone.

//Jyrki

Apologies (was Re: LINUX is obsolete) Ari Lemmke 31.01.92 15:38

In article <1992Jan30....@klaava.Helsinki.FI> torv...@klaava.Helsinki.FI (Linus Benedict Torvalds) writes:
   In article <1992Jan29.2...@klaava.Helsinki.FI> I wrote:
:  :Well, with a subject like this, I'm afraid I'll have to reply.
:   And reply I did, with complete abandon, and no thought for good taste
:   and netiquette.  Apologies to ast, and thanks to John Nall for a friendy
:   "that's not how it's done"-letter.  I over-reacted, and am now composing

        I didn't and still don't see anything wrong with a FOLLOWUP, if
        I'm getting *bashed* on the net. Linus' article was clear
        and not against 'good taste' (whatever that is).

        Linus doesn't have anything to apologise for, not even on
        comp.os.minix.

:                   Linus "my first, and hopefully last flamefest" Torvalds

        arl                // has nothing to do what I'm thinking about
                        // Minix or Linux.

LINUX is obsolete Douglas Graham 31.01.92 16:26
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>   While I could go into a long story here about the relative merits of the
>   two designs, suffice it to say that among the people who actually design
>   operating systems, the debate is essentially over.  Microkernels have won.

Can you recommend any (unbiased) literature that points out the strengths
and weaknesses of the two approaches?  I'm sure that there is something
to be said for the microkernel approach, but I wonder how closely
Minix resembles the other systems that use it.  Sure, Minix uses lots
of tasks and messages, but there must be more to a microkernel architecture
than that.  I suspect that the Minix code is not split optimally into tasks.

>   The only real argument for monolithic systems was performance, and there
>   is now enough evidence showing that microkernel systems can be just as
>   fast as monolithic systems (e.g., Rick Rashid has published papers comparing
>   Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

My main complaint with Minix is not its performance.  It is that adding
features is a royal pain -- something that I presume a microkernel
architecture is supposed to alleviate.

>   MINIX is a microkernel-based system.

Is there a consensus on this?

>   LINUX is
>   a monolithic style system.  This is a giant step back into the 1970s.
>   That is like taking an existing, working C program and rewriting it in
>   BASIC.  To me, writing a monolithic system in 1991 is a truly poor idea.

This is a fine assertion, but I've yet to see any rationale for it.
Linux is only about 12000 lines of code I think.  I don't see how
splitting that into tasks and blasting messages around would improve it.

>Don't get me wrong, I am not unhappy with LINUX.  It will get all the people
>who want to turn MINIX into BSD UNIX off my back.  But in all honesty, I would
>suggest that people who want a **MODERN** "free" OS look around for a
>microkernel-based, portable OS, like maybe GNU or something like that.

Well, there are no other choices that I'm aware of at the moment.  But
when GNU OS comes out, I'll very likely jump ship again.  I sense that
you *are* somewhat unhappy about Linux (and that surprises me somewhat).
I would guess that the reason so many people embraced it, is because it
offers more features.  Your approach to people requesting features in
Minix, has generally been to tell them that they didn't really want that
feature anyway.  I submit that the exodus in the direction of Linux
proves you wrong.

Disclaimer:  I had nothing to do with Linux development.  I just find
             it an easier system to understand than Minix.
--
Doug Graham         dgr...@bnr.ca         My opinions are my own.

LINUX is obsolete Charles Hedrick 31.01.92 16:27
The history of software shows that availability wins out over
technical quality every time.  That's Linux' major advantage.  It's a
small 386-based system that's fairly compatible with generic Unix, and
is freely available.  I dropped out of the Minix community a couple of
years ago when it became clear that (1) Minix was not going to take
advantage of anything beyond the 8086 anytime in the near future, and
(2) the licensing -- while amazingly friendly -- still made it hard
for people who were interested in producing a 386 version.  Several
people apparently did nice work for the 386.  But all they could
distribute were diffs.  This made bringing up a 386 system a job that
isn't practical for a new user, and in fact I wasn't sure I wanted to
do it.  

I apologize if things have changed in the last couple of years.  If
it's now possible to get a 386 version in a form that's ready to run,
the community has developed a way to share Minix source, and bringing
up normal Unix programs has become easier in the interim, then I'm
willing to reconsider Minix.  I do like its design.

It's possible that Linux will be overtaken by Gnu or a free BSD.
However, if the Gnu OS follows the example of all other Gnu software,
it will require a system with 128MB of memory and a 1GB disk to use.
There will still be room for a small system.  My ideal OS would be 4.4
BSD.  But 4.4's release date has a history of extreme slippage.  With
most of their staff moving to BSDI, it's hard to believe that this
situation is going to be improved.  For my own personal use, the BSDI
system will probably be great.  But even their very attractive pricing
is likely to be too much for most of our students, and even though
users can get source from them, the fact that some of it is
proprietary will again mean that you can't just put altered code out
for public FTP.  At any rate, Linux exists, and the rest of these
alternatives are vapor.

LINUX is obsolete Theodore Y. Ts'o 31.01.92 13:40
>From: a...@cs.vu.nl (Andy Tanenbaum)
>ftp.cs.vu.nl =  192.31.231.42 in dir minix/simulator.)  I think it is a
>gross error to design an OS for any specific architecture, since that is
>not going to be around all that long.

It's not your fault for believing that Linux is tied to the 80386
architecture, since many Linux supporters (including Linus himself) have
made this statement.  However, the amount of 80386-specific code is
probably not much more than what is in a Minix implementation, and there
is certainly a lot less 80386-specific code in Linux than there is
Vax-specific code in BSD 4.3.

Granted, the port to other architectures hasn't been done yet.  But if I
were going to bring up a Unix-like system on a new architecture, I'd
probably start with Linux rather than Minix, simply because I want to
have some control over what I can do with the resulting system when I'm
done with it.  Yes, I'd have to rewrite large portions of the VM and
device driver layers --- but I'd have to do that with any other OS.
Maybe it would be a little bit harder than it would to port Minix to the
new architecture; but this would probably be only true for the first
architecture that we ported Linux to.

>While I could go into a long story here about the relative merits of the
>two designs, suffice it to say that among the people who actually design
>operating systems, the debate is essentially over.  Microkernels have won.
>The only real argument for monolithic systems was performance, and there
>is now enough evidence showing that microkernel systems can be just as
>fast as monolithic systems (e.g., Rick Rashid has published papers comparing
>Mach 3.0 to monolithic systems) that it is now all over but the shoutin'.

This is not necessarily the case; I think you're painting a much more
black and white view of the universe than necessarily exists.  I refer
you to such papers as Brent Welch's (we...@parc.xerox.com) "The
Filesystem Belongs in the Kernel" paper, wherein he argues that the
filesystem is a mature enough abstraction that it should live in the
kernel, not outside of it as it would in a strict microkernel design.

There are also several people who have been concerned about the speed of
OSF/1 Mach when compared with monolithic systems; in particular, the
number of context switches required to handle network traffic, especially
for networked filesystems.

I am aware of the benefits of a micro kernel approach.  However, the
fact remains that Linux is here, and GNU isn't --- and people have been
working on Hurd for a lot longer than Linus has been working on Linux.
Minix doesn't count because it's not free.  :-)  

I suspect that the balance of micro kernels versus monolithic kernels
depend on what you're doing.  If you're interested in doing research, it
is obviously much easier to rip out and replace modules in a micro
kernel, and since only researchers write papers about operating systems,
ipso facto micro kernels must be the right approach.  However, I do know
a lot of people who are not researchers, but who are rather practical
kernel programmers, who have a lot of concerns over the cost of copying
and the cost of context switches which are incurred in a micro kernel.

By the way, I don't buy your arguments that you don't need a
multi-threaded filesystem on a single user system.  Once you bring up a
windowing system, and have a compile going in one window, a news reader
in another window, and UUCP/C News going in the background, you want
good filesystem performance, even on a single-user system.  Maybe to a
theorist it's an unnecessary optimization and a (to use your words)
"performance hack", but I'm interested in a Real operating system ---
not a research toy.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Theodore Ts'o                                bloom-beacon!mit-athena!tytso
308 High St., Medford, MA 02155                ty...@athena.mit.edu
   Everybody's playing the game, but nobody's rules are the same!

LINUX is obsolete j...@jshark.rn.com 31.01.92 04:55
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>In article <1992Jan29.1...@epas.toronto.edu> meg...@epas.utoronto.ca (David Megginson) writes:
>>
>>Why does the
>>Intel architecture _not_ allow drivers to be independent programs?
>
>The drivers have to read and write the device registers in I/O space, and
>this cannot be done in user mode on the 286 and 386. If it were possible
>to do I/O in a protected way in user space,

[[We must be talking about protected mode]] *THIS IS UNTRUE*

The Intel architecture supports independent tasks, each of which can be
given a "i/o privilege level". The convenient approach, used by iRMX(?), is
to "build" a load image ("root" device driver, kernel, MM and FS). Once
booted, these could be replaced by loadable tasks from disc (or network...)
and given a suitable privilege level.

The '386 additionally allows each task to have an "i/o permissions bitmap"
which specifies exactly which ports can be used.
(See "80386 Programmers Reference Manual", chapter 8)

>                                            all the I/O tasks could have
>been user programs, like FS and MM.

Do you really mean "user programs" and not "separate tasks" ??

Separate tasks, possibly privileged, I'll agree with.

User level programs may be ok for teaching operating system principles, or on
toy computers :-)  But a "production" system?  Not on my machines!

>Andy Tanenbaum (a...@cs.vu.nl)

joe.
--
j...@jshark.rn.com
uunet!nstar!jshark!joe

LINUX is obsolete j...@jshark.rn.com 31.01.92 05:21
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>   MINIX was designed to be reasonably portable, and has been ported from the
>   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
>   LINUX is tied fairly closely to the 80x86.  Not the way to go.

If you looked at the source instead of believing the author, you'd realise
this is not true!

He's replaced 'fubyte' by a routine which explicitly uses a segment register
- but that could be easily changed. Similarly, apart from a couple of places
which assume the '386 MMU, a couple of macros to hide the exact page sizes
etc would make porting trivial. Using '386 TSS's makes the code simpler,
but the VAX and WE32000 have similar structures.

As he's already admitted, a bit of planning would have made the system
neater, but merely putting '386 assembler around isn't a crime!

And with all due respect:
  - the Book didn't make an issue of portability (apart from a few
    "#ifdef M8088"s)
  - by the time it was released, Minix had come to depend on several
    8086 "features" that caused uproar from the 68000 users.

>Andy Tanenbaum (a...@cs.vu.nl)

joe.
--
j...@jshark.rn.com

LINUX is obsolete Will Rose 01.02.92 04:16

I've used Minix quite a bit on a PC XT, from version 1.2 onwards, and
a couple of points seem worth making.  Firstly that I ordered version
1.1 from Prentice Hall, and am devoutly thankful that they delayed my
order until 1.2 was available.  The first version of something as
complicated as an OS is only for the dedicated, and that goes for Linux
too I should think.

Secondly Minix has evolved to a reliable OS on its original PC platform,
but is still getting there on, e.g., the Mac; these things do take time.

Thirdly even (standard) PC 1.5 Minix won't run a lot of current Unix
software.  Partly this is a matter of the hardware being too limited,
and partly a matter of Minix being too limited in, e.g., the tty driver.
(And even this tty driver took a lot of sorting out in the early days).

Fourthly, I bought my XT four years ago - the motherboard was $110,
and memory (falling in price) was $7.00 per 256KB chip.  Last autumn
I bought my wife an XT to replace her CP/M word-processor - the m/b
was $50, and memory was $1.50 a chip.  This week I replaced a dead
286 board for a friend - the drop-in 16MHz 386SX was $140, and memory
was $40 for 9 x 1MB...  If I actually wanted an OS to use today, I
think I'd go with Linux; but if I wanted to learn about OS's, I think
I'd use Minix.  It looks as if they both do what they were designed
to do.

Will
c...@pnet01.cts.com

LINUX is obsolete n.h.chandler 01.02.92 17:38
I have been following the Minix/Linux discussion.  How
can I get a copy of Linux?

Neville H. Chandler
cbnewsj!n...@att.com

LINUX is obsolete Drew Eckhardt 02.02.92 04:17
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>In article <1992Jan29.1...@epas.toronto.edu> meg...@epas.utoronto.ca (David Megginson) writes:
>>
>>Why does the
>>Intel architecture _not_ allow drivers to be independent programs?
>
>The drivers have to read and write the device registers in I/O space, and
>this cannot be done in user mode on the 286 and 386. If it were possible
>to do I/O in a protected way in user space, all the I/O tasks could have
>been user programs, like FS and MM.
>
>Andy Tanenbaum (a...@cs.vu.nl)

Every 386 TSS has an I/O permission bitmap.  If the CPL is of a lower privilege
level than IOPL, the I/O permission bitmap is consulted, allowing protection
on a port-by-port basis.
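
In rough C, the order of the check might look like this (a sketch with
invented names; "lower privilege" on the 386 means a numerically greater
CPL):

    #include <stdint.h>

    static uint8_t io_bitmap[65536 / 8];      /* set bit = port denied */

    static int io_access_ok(int cpl, int iopl, uint16_t port)
    {
        if (cpl <= iopl)                      /* privileged enough: no check */
            return 1;
        return !(io_bitmap[port >> 3] & (1u << (port & 7)));
    }

    int main(void)
    {
        /* A CPL-3 task with IOPL 0 falls through to the bitmap. */
        return io_access_ok(3, 0, 0x60) ? 0 : 1;
    }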

 

LINUX is obsolete Allan Duncan 02.02.92 14:06
From article <1992Jan30....@menudo.uh.edu>, by ke...@nuchat.sccsi.com (Kevin Brown):
 
> The *entire* system call interface in Minix needs to be rethought.  As it
> stands right now, the file system is not just a file system, it's also a
> system-call server.  That functionality needs to be separated out in order
> to facilitate a multiple file system architecture.  Message passing is
> probably the right way to go about making the call and waiting for it, but
> the message should go to a system call server, not the file system itself.
>
> In order to handle all the special caveats of the Unix API, you end up writing
> a monolithic "kernel" even if you're using a microkernel base.  You end up
> with something called a "server", and an example is the BSD server that runs
> under Mach.
>
> And, in any case, the message-passing in Minix needs to be completely redone.
> As it is, it's a kludge.  I've been giving this some thought, but I haven't
> had time to do anything with what I've thought of so far.  Suffice it to say
> that the proper way to do message-passing is probably with message ports
> (both public and private), with the various visible parts of the operating
> system having public message ports.  Chances are, that ends up being the
> system call server only, though this will, of course, depend on the goals
> of the design.
 
It gets to sound more and more like Tripos and the Amiga :-)

Allan Duncan        ACSnet         aduncan@trl.oz
(+613) 541 6708        Internet adu...@trl.oz.au
                UUCP         {uunet,hplabs,ukc}!munnari!trl.oz.au!aduncan
Telecom Research Labs, PO Box 249, Clayton, Victoria, 3168, Australia.

LINUX is obsolete j...@jshark.rn.com 02.02.92 15:59
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>
>I was in the U.S. for a couple of weeks, so I haven't commented much on
>LINUX (not that I would have said much had I been around), but for what
>it is worth, I have a couple of comments now.

Maybe keeping quiet would have been best.

>1. MICROKERNEL VS MONOLITHIC SYSTEM
>
>   While I could go into a long story here about the relative merits of the
>   two designs, suffice it to say that among the people who actually design
>   operating systems, the debate is essentially over.

No, MS-DOS won. Sad, but there you are. 60 million: Next

It would be churlish to point out that MS-DOS has loadable device drivers
and that VMS is now (basically) a set of loadable service modules and drivers.

"Microkernel" was the buzz-word of last year, so Minix is a microkernel.
"Object-oriented" is this years, so Minix is object-oriented - right?


joe.
 ----
j...@jshark.rn.com
uunet!nstar!jshark!joe

I'm a mutated .sig virus, I got this from Henry Spencer's:
"As a user, I'll take speed over features anyday" - A Tanenbaum

LINUX is obsolete Kevin Brown 02.02.92 21:12
It has been brought to my attention that my last posting was exceedingly
harsh.  Having reread it, I'm inclined to agree.

Dr. Tanenbaum claims that the microkernel architecture is the way to go.
He has a great deal more experience with operating systems than I have.
It's an understatement that it's likely that there's some substance to
his statement.  :-)

Many of the things I said in my previous posting were more a result of my
philosophical viewpoint on operating systems and programming in general
than experience.  And the particular viewpoint I hold that's relevant to
the discussion says that the method of implementation chosen depends on
the design goals, and that there is no "wrong" or "right" way to do things
that is independent of such goals.  Thus, my statement that a monolithic
kernel follows from some design goals, e.g. ease of implementation of the
semantics of the Unix API.  In particular, the ease of implementing things
like signal handling, premature system call termination, etc.  At least,
that's the conclusion I come to when I think about the problem.

My experience with Minix says that there are a number of things that should
not go in a user process, things that are better left in the kernel.  Things
like memory allocation (which requires global knowledge of the hardware,
something that a user process should, IMHO, not have) and signal handling
(which requires building stack frames).

So from my point of view, the architecture of Minix is not ideal.  While
it may win in that it's a "microkernel" architecture, the division of
functionality is not entirely to my liking.  As is undoubtedly plainly
obvious by now.  :-)

Despite that, Minix is quite usable in many ways as a personal operating
system, i.e. one where there is usually only one person logged into the
system.  If I gave the impression that I thought it was unusable in general,
then I apologize for that.

However, as a *multiuser* operating system, i.e. an operating system designed
to efficiently meet the needs of multiple users simultaneously while also
performing batch operations, Minix is lacking, as far as I'm concerned.  
The main reason, of course, is the single-threaded file system (hereafter,
STFS).  Now, Dr. Tanenbaum may feel that a multi-threaded file system
(hereafter, MTFS) is merely a performance hack.  Perhaps he's right.
Perhaps the architecture of a MTFS is sufficiently similar to that of a
STFS that his assessment is correct.  My vision of a MTFS may differ
significantly from his, and this would explain why he and I seem to have
a difference of opinion on this matter.  Regardless of whether or not a
MTFS is a "performance hack", for a *multiuser* operating system, I think
there are a lot of good arguments that say that a MTFS is a *necessary*
"performance hack".  Provided, of course, that one does not have infinite
buffer cache resources.  :-)

There are other things I feel Minix lacks as well.  The ability to allocate
memory in the kernel is one (such an ability would allow any user process,
e.g. device drivers and the file system, to allocate memory dynamically.
This is useful for doing things like resizing the buffer cache on the fly,
etc.).  The ability to pass arbitrarily sized messages, optionally via shared
memory, is another (such an ability might be limited by constraints like
page size and such).


However much Minix may be lacking from my standpoint, it is nevertheless
a very useful and welcome enhancement to my system.  In spite of the
impression that I may have given everyone in my last posting, there will
always be a soft spot in my heart for it, if only because it's the first
decent operating system I've had on my system that I've had source to.
I don't have to tell you people how incredibly useful it is to have source.
You already know.

It is very important to me to have source code to the things I run.  It
bothers me a great deal to run things that I don't have source to.  Even
the C compiler.  And the less expensive the source is, the better.  This
is why Dr. Tanenbaum's statements about Linux touched a raw nerve with me:
Linux comes with source *and* it's free.  And it's available right now.

Someone, either here on this newsgroup or over on alt.os.linux, made a
very valid observation: the cost of a 16 MHz 386SX system is about $140
more than a comparably equipped (in terms of RAM size, display technology,
hard drive space, etc.) 8088 system.  Minix is $169.  In economic terms,
Linux wins if you have to buy Minix.

Where Minix wins (or is at least even :-) is when you can get it for free
via the educational distribution clause of the license agreement.  However,
Minix will run even better on a 16 MHz 386SX than on an 8088.  If I were
a student, I'd get the 386SX unless I simply didn't have a choice.  Then
I'd get whichever operating system I could get for the least cost.  If I
could get both for free, then I'd get both.  :-)


Given the reasons Linus wrote Linux, I think it's hard for anyone to fault
him for writing it the way he did.  And he was extremely nice in making
his code freely available to the rest of the world.  It's not something he
had to do.  In my book, that makes him almost beyond reproach.


Dr. Tanenbaum didn't make Minix free.  His goals were different.  Minix
is a teaching aid above all else (unless Dr. Tanenbaum has changed his
views about Minix :-).  That means that he must be concerned with the
most efficient way to get Minix to the student population.  At the time
Minix was released, Prentice-Hall was a good solution, and has been for
some time.  However, I must wonder whether or not this is still the case.
Dr. Tanenbaum: do you still feel that free distribution of Minix via the
net is not the best way to distribute Minix?


Which wins?  Minix or Linux?  Depends on how you measure them...


                                Kevin Brown

LINUX is obsolete peter da silva 03.02.92 08:22
In article <1992Jan31....@ohm.york.ac.uk> pe...@ohm.york.ac.uk (-Pete French.) writes:
> But on the points about microkernel v monolithic, isn't this partly an
> artifact of the language being used?

I doubt it.

  [isn't MINIX]
> written as separate programs simply because C does not support the idea
> of multiple processes within a single piece of monolithic code.

C doesn't support formatted I/O either, but it can be implemented quite
effectively in C. So can concurrent processes. I've done it, in fact.
The resulting code is 90% portable (the 10% being the code that handles
the context switch).
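
A minimal sketch of the idea, using the POSIX ucontext calls to stand in
for the unportable 10% (an illustrative reconstruction, not the code
referred to above):

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, task_ctx;
    static char task_stack[64 * 1024];

    /* One cooperative "process": runs, yields, runs again. */
    static void task(void)
    {
        puts("task: running");
        swapcontext(&task_ctx, &main_ctx);     /* yield to main */
        puts("task: resumed");
    }                                          /* returning follows uc_link */

    int main(void)
    {
        getcontext(&task_ctx);
        task_ctx.uc_stack.ss_sp   = task_stack;
        task_ctx.uc_stack.ss_size = sizeof task_stack;
        task_ctx.uc_link          = &main_ctx;
        makecontext(&task_ctx, task, 0);

        swapcontext(&main_ctx, &task_ctx);     /* run task until it yields */
        puts("main: task yielded");
        swapcontext(&main_ctx, &task_ctx);     /* let it run to completion */
        puts("main: task finished");
        return 0;
    }

Everything here is ordinary C; only swapcontext and friends hide the
machine-specific context switch.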
--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"

What good does this war do? (Re: LINUX is obsolete) peter da silva 03.02.92 08:37
Will you quit flaming each other?

I mean, linux is designed to provide a reasonably high performance environment
on a hardware platform crippled by years of backwards-compatible kludges. Minix
is designed as a teaching tool. Neither is that good at doing the other's job,
and why should they be? The fact that Minix runs out of steam quickly (and it
does) isn't a problem in its chosen milieu. It's sure better than the TOY
operating system. The fact that Linux isn't transportable beyond the 386/AT
platform isn't a problem when there are millions of them out there (and quite
cheap: you can get a 386/SX for well under $1000).

A monolithic kernel is easy enough to build that it's worth doing it if it gets
a system out the door early. Think of it as a performance hack for programmer
time. The API is portable. You can replace the kernel with a microkernel
design (and MINIX isn't the be-all and end-all of microkernel designs either:
even for low end PCs... look at AmigaOS) without disturbing the applications.
That's the whole point of a portable API in the first place.

Microkernels are definitely a better design for many tasks. It takes more
work to make them efficient, so a simpler design that doesn't take advantage
of the microkernel in any real way is worth doing for pedagogical reasons.
Think of it as a performance hack for student time. The design is still good
and when you can get an API to the microkernel interface you can get VERY
impressive performance (thousands of context switches per second on an 8
MHz 68000).


--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"

LINUX is obsolete peter da silva 03.02.92 09:40
In article <1992Feb01.0...@bmerh2.bnr.ca> dgr...@bmers30.bnr.ca (Douglas Graham) writes:
> Minix resembles the other systems that use it.  Sure, Minix uses lots
> of tasks and messages, but there must be more to a microkernel architecture
> than that.  I suspect that the Minix code is not split optimally into tasks.

Definitely. Minix shows you how a microkernel works, but it sure doesn't show
you why you would use one.

A couple of years ago I brought this up with Andy, and his response indicated
that he was himself not convinced of the superiority of the microkernel design
at the time. He said (as near as I can recall... this is a paraphrase) that a
message passing design was inherently slower than a monolithic one... which was
news to me: I had (and still have) a message-passing PC that was MUCH more
responsive than any UNIX box I ever touched.

> >   MINIX is a microkernel-based system.

> Is there a consensus on this?

Yes, it's not a well-factored one, and there's no API to the microkernel
interface, but it's a microkernel design.


--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"

LINUX is obsolete Richard Tobin 04.02.92 06:46
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>A multithreaded file system is only a performance hack.  When there is only
>one job active, the normal case on a small PC, it buys you nothing

I find the single-threaded file system a serious pain when using
Minix.  I often want to do something else while reading files from the
(excruciatingly slow) floppy disk.  I rather like to play rogue while
waiting for large C or Lisp compilations.  I like to look at files in
one editor buffer while compiling in another.

(The problem would be somewhat less if the file system stuck to
serving files and didn't interact with terminal i/o.)

Of course, in basic Minix with no virtual consoles and no chance of
running emacs, this isn't much of a problem.  But to most people
that's a failure, not an advantage.  It just isn't the case that on
single-user machines there's no use for more than one active process;
the idea only has any plausibility because so many people are used to
poor machines with poor operating systems.

As to portability, Minix only wins because of its limited ambitions.
If you wanted a full-featured Unix with paging, job-control, a window
system and so on, would it be quicker to start from basic Minix and
add the features, or to start from Linux and fix the 386-specific
bits?  I don't think it's fair to criticise Linux when its aims are so
different from Minix's.  If you want a system for pedagogical use,
Minix is the answer.  But if what you want is an environment as much
like (say) a Sun as possible on your home computer, it has some
deficiencies.

-- Richard
--
Richard Tobin,
AI Applications Institute,                                R.T...@ed.ac.uk
Edinburgh University.

LINUX is obsolete Ken Thompson 03.02.92 15:07
viewpoint may be largely unrelated to its usefulness. Many if not
most of the software we use is probably obsolete according to the
latest design criteria. Most users could probably care less if the
internals of the operating system they use is obsolete. They are
rightly more interested in its performance and capabilities at the
user level.

I would generally agree that microkernels are probably the wave of
the future. However, it is in my opinion easier to implement a
monolithic kernel. It is also easier for it to turn into a mess in
a hurry as it is modified.

                                Regards,
                                        Ken

--
Ken Thompson  GTRI, Ga. Tech, Atlanta Ga. 30332 Internet:!k...@prism.gatech.edu
uucp:...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!kt4
"Rowe's Rule: The odds are five to six that the light at the end of the
tunnel is the headlight of an oncoming train."       -- Paul Dickson

LINUX is obsolete Kevin Brown 04.02.92 00:08
In article <47...@hydra.gatech.EDU> k...@prism.gatech.EDU (Ken Thompson) writes:
>viewpoint may be largely unrelated to its usefulness. Many if not
>most of the software we use is probably obsolete according to the
>latest design criteria. Most users could probably care less if the
>internals of the operating system they use is obsolete. They are
>rightly more interested in its performance and capabilities at the
>user level.
>
>I would generally agree that microkernels are probably the wave of
>the future. However, it is in my opinion easier to implement a
>monolithic kernel. It is also easier for it to turn into a mess in
>a hurry as it is modified.

How difficult is it to structure the source tree of a monolithic kernel
such that most modifications don't have a large negative impact on the
source?  What sorts of pitfalls do you run into in this sort of endeavor,
and what suggestions do you have for dealing with them?

I guess what I'm asking is: how difficult is it to organize the source
such that most changes to the kernel remain localized in scope, even
though the kernel itself is monolithic?

I figure you've got years of experience with monolithic kernels :-),
so I'd think you'd have the best shot at answering questions like
these.

>Ken Thompson  GTRI, Ga. Tech, Atlanta Ga. 30332 Internet:!k...@prism.gatech.edu
>uucp:...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!kt4
>"Rowe's Rule: The odds are five to six that the light at the end of the
>tunnel is the headlight of an oncoming train."       -- Paul Dickson

                                Kevin Brown

LINUX is obsolete peter da silva 03.02.92 09:32
In article <TYTSO.92J...@SOS.mit.edu> ty...@athena.mit.edu (Theodore Y. Ts'o) writes:
> This is not necessarily the case; I think you're painting a much more
> black and white view of the universe than necessarily exists.  I refer
> you to such papers as Brent Welch's (we...@parc.xerox.com) "The
> File System Belongs in the Kernel" paper, wherein he argues that the
> filesystem is a mature enough abstraction that it should live in the
> kernel, not outside of it as it would in a strict microkernel design.

What does "a mature enough abstraction" mean, here? Things don't move
into the kernel simply because they're now considered safe and stable
enough, but because they're too inefficient when they're outside it or
they lose functionality by being outside it, and there's no easy fix.

The Amiga operating system certainly benefits from having a file system
outside the kernel. There are dozens of file systems, many of them written
by hobbyists, available. Ideas like "assigned paths" can be played with
in the file system without breaking stuff. All these file systems have a
common interface and so look to the application as part of the operating
system, but just because something is on the other side of the API doesn't
mean it is, or belongs, in the kernel.

> There are also several people who have been concerned about the speed of
> OSF/1 Mach when compared with monolithic systems; in particular, the
> number of context switches required to handle network traffic, and
> networked filesystems in particular.

If this is because the networking was moved out of the kernel, I consider
it a price well worth paying. Having networking code in the kernel is the
source of many subtle bugs in networks. Just for something that bit us,
what happens if you need to get to the upper level driver before you can
acknowledge a packet, but the process that you need to run is hung up in
the tty driver waiting for a ^Q?

Something *I* would have expected to find in the kernel before now, yet
isn't, is windowing systems. With a microkernel (and the associated lower
*cost* of a context switch) you can get much of the advantages of a kernel
window system without paying the cost in complexity.


--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"

LINUX is obsolete Kevin Brown 04.02.92 00:28
In article <1992Feb2.2...@trl.oz.au> adu...@rhea.trl.OZ.AU (Allan Duncan) writes:
>From article <1992Jan30....@menudo.uh.edu>, by ke...@nuchat.sccsi.com (Kevin Brown):
>
>> The *entire* system call interface in Minix needs to be rethought.  As it
>> stands right now, the file system is not just a file system, it's also a
>> system-call server.  That functionality needs to be separated out in order
>> to facilitate a multiple file system architecture.  Message passing is
>> probably the right way to go about making the call and waiting for it, but
>> the message should go to a system call server, not the file system itself.
>>
>> In order to handle all the special caveats of the Unix API, you end up writing
>> a monolithic "kernel" even if you're using a microkernel base.  You end up
>> with something called a "server", and an example is the BSD server that runs
>> under Mach.
>>
>> And, in any case, the message-passing in Minix needs to be completely redone.
>> As it is, it's a kludge.  I've been giving this some thought, but I haven't
>> had time to do anything with what I've thought of so far.  Suffice it to say
>> that the proper way to do message-passing is probably with message ports
>> (both public and private), with the various visible parts of the operating
>> system having public message ports.  Chances are, that ends up being the
>> system call server only, though this will, of course, depend on the goals
>> of the design.
>
>It gets to sound more and more like Tripos and the Amiga :-)

There's no question that many of my ideas spring from the architecture
of the Amiga's operating system.  It's pretty impressive to see a
message-passing, multitasking operating system that operates as fast
as the Amiga's OS does on hardware that slow.  They did a lot of things
right.

There are some ideas that, I think, are my own.  Or, at least, that I've
developed independently.  For example, if you have a message-passing
system that includes the option to transfer message memory ownership to the
target process, then it naturally follows that you can globally optimize the
use of your block cache by making your block cache global with respect
to *all* filesystems.  The filesystem code requests blocks from the
block cache manager and tells the block cache manager what device driver
to call and what parameters to send it when flushing the block.  The block
cache manager replies with a message that is the size of a block (or, if
you wish to allocate several at a time, several blocks).  Since
ownership is transferred as a result of passing the message, the block
cache manager can allocate the memory itself, optionally flushing as
many blocks as it needs in order to free up enough to send to the caller.
The block cache manager is, of course, a user process.  If the filesystem
code is written right, you can kill the block cache manager in order to
disable the block cache.  The filesystem will simply do its thing
unbuffered.  Makes for a slow system, but at least you can do it.  You
can also change the behavior of the buffer cache by sending control
messages to the cache manager.  Can you say "tunable parameters"?  :-)

You could also accomplish this with some sort of shared memory, but this
would require semaphore control of the allocation list.  You'd also have
to figure out a way to flush bits of the cache when needed (easy to do
if you're a monolithic kernel, but I'm referring to a microkernel) without
colliding with another process writing into the block.  Semaphore control
of the individual blocks as well?
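
A sketch of what the request/reply pair above might look like in C.
Every name is invented, and this is only one possible reading of the
scheme, with malloc standing in for the arrival of an owned message:

    #include <stdlib.h>

    #define BLOCK_SIZE 1024

    /* How the filesystem tells the cache manager to flush a block. */
    typedef void (*flush_fn)(int device, long blockno, const void *data);

    struct block_request {
        int      device;       /* driver to flush through */
        long     blockno;      /* which block is wanted   */
        flush_fn flush;        /* write-back procedure    */
    };

    /* Stand-in for the cache manager's reply.  In a real message-
       passing system this buffer would arrive as a message whose
       ownership moves to the receiver; the manager may first flush
       cached blocks via req->flush to free enough memory. */
    static void *request_block(const struct block_request *req)
    {
        (void)req;
        return malloc(BLOCK_SIZE);
    }

    static void no_flush(int device, long blockno, const void *data)
    {
        (void)device; (void)blockno; (void)data;
    }

    int main(void)
    {
        struct block_request req = { 0, 42L, no_flush };
        void *blk = request_block(&req);   /* caller now owns this memory */
        free(blk);
        return 0;
    }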

>Allan Duncan        ACSnet         aduncan@trl.oz
>(+613) 541 6708        Internet adu...@trl.oz.au
>                UUCP         {uunet,hplabs,ukc}!munnari!trl.oz.au!aduncan
>Telecom Research Labs, PO Box 249, Clayton, Victoria, 3168, Australia.

                                Kevin Brown

LINUX is obsolete Julien Maisonneuve 03.02.92 09:10
I would like to second Kevin Brown in most of his remarks.
I'll add a few user points:
- When ast states that FS multithreading is useless, it reminds me of the many
times I tried to let a job run in the background (like when reading an archive
on a floppy); it is just unusable, and the & shell operator could as well have
been left out.
- Most interesting utilities are not even compilable under Minix because of the
ACK compiler's incredible limits. Those were hardly understandable on a basic
PC, but become absurd on a 386. Every stupid DOS compiler has a large model
(more expensive, OK). I hate the 13-bit compress!
- The lack of Virtual Memory support prevents people studying this area from
experimenting, and prevents users from using large programs. The strange design
of the MM also makes it hard to modify.

The problem is that even doing exploratory work under Minix is painful.
If you want to get any work done (or even have fun), even DOS is becoming a
better alternative (with things like DJGPP).
In its basic form, it is really no more than an OS course example: a good
toy, but a toy. Obtaining and applying patches is a pain, and precludes
further upgrades.

Too bad when not so much is missing to make it really good.
Thanks for the work Andy, but Linux didn't deserve your answer.
For the common people, it does many things better than Minix.

                                        Julien Maisonneuve.

This is not a flame, just my experience.

LINUX is obsolete Michael L. Kaufman 03.02.92 14:27
I tried to send these two posts from work, but I think they got eaten. If you
have seen them already, sorry.

-------------------------------------------------------------------------------

Andy Tanenbaum writes an interesting article (also interesting was finding out
that he actually reads this group) but I think he is missing an important
point.

He wrote:
>As most of you know, for me MINIX is a hobby, ...

Which is also probably true of most, if not all, of the people who are involved
in Linux. We are not developing a system to take over the OS market, we are
just having a good time.

>   What is going to happen
>   is that they will gradually take over from the 80x86 line.  They will
>   run old MS-DOS programs by interpreting the 80386 in software.

Well when this happens, if I still want to play with Linux, I can just run it
on my 386 simulator.

>   MINIX was designed to be reasonably portable, and has been ported from the
>   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
>   LINUX is tied fairly closely to the 80x86.  Not the way to go.

That's fine for the people who have those machines, but it wasn't a free
lunch. That portability was gained at the cost of some performance and some
features on the 386. Before you decide that LINUX is not the way to go, you
should think about what it is going to be used for.  I am going to use it for
running memory- and computation-intensive graphics programs on my 486. For me,
speed and memory were more important than future state-of-the-artness and
portability.

>But in all honesty, I would
>suggest that people who want a **MODERN** "free" OS look around for a
>microkernel-based, portable OS, like maybe GNU or something like that.

I don't know of any free microkernel-based, portable OSes. GNU is still
vaporware, and likely to remain that way for the foreseeable future. Do
you actually have one to recommend, or are you just toying with me? ;-)

------------------------------------------------------------------------------

In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>My point is that writing a new operating system that is closely tied to any
>particular piece of hardware, especially a weird one like the Intel line,
>is basically wrong.  An OS itself should be easily portable to new hardware
>platforms.

I think I see where I disagree with you now. You are looking at OS design
as an end in itself. Minix is good because it is portable/Micro-Kernel/etc.
Linux is not good because it is monolithic/tightly tied to Intel/etc. That
is not a strange attitude for someone in the academic world, but it is not
something you should expect to be universally shared. Linux is not being written
as a teaching tool, or as an abstract exercise. It is being written to allow
people to run GNU-type software _today_. The fact that it may not be in use
in five years is less important than the fact that today (well, by April
probably) I can run all sorts of software on it that I want to run. You keep
saying that Minix is better, but if it will not run the software that I want
to run, it really isn't that good (for me) at all.

>                     When OS/360 was written in assembler for the IBM 360
>25 years ago, they probably could be excused.  When MS-DOS was written
>specifically for the 8088 ten years ago, this was less than brilliant, as
>IBM and Microsoft now only too painfully realize.

Same point. MSoft did not come out with Dos to "explore the frontiers of os
research". They did it to make a buck. And considering the fact that MS-DOS
probably still outsells everyone else put together, I don't think you can
say that they have failed _in their goals_. Not that MS-DOS is the best OS
in terms of anything else, only that it has served their needs.

Michael


--
Michael Kaufman | I've seen things you people wouldn't believe. Attack ships on
 kaufman        | fire off the shoulder of Orion. I watched C-beams glitter in
  @eecs.nwu.edu | the dark near the Tannhauser gate. All those moments will be
                | lost in time - like tears in rain. Time to die.     Roy Batty

LINUX is obsolete Jonathan Allen 02.02.92 23:43
In article <12...@star.cs.vu.nl>, a...@cs.vu.nl (Andy Tanenbaum) wrote:
> In article <1992Jan29.1...@epas.toronto.edu> meg...@epas.utoronto.ca (David Megginson) writes:
>>
>>Why does the
>>Intel architecture _not_ allow drivers to be independent programs?
>
> The drivers have to read and write the device registers in I/O space, and
> this cannot be done in user mode on the 286 and 386. If it were possible
> to do I/O in a protected way in user space, all the I/O tasks could have
> been user programs, like FS and MM.

Surely this could have been done by a minute task just to read/write a
given port address in one message?  The security could have been checked
like everything else using the process table...

Sure, it would not have been at all efficient, but it would have given
the independence, at a price.
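
Something like this, say, with the message format and all the helpers
invented for illustration (a real version would use the system's
send/receive primitives and check the sender against the process table):

    struct io_msg {
        int            write;   /* 0 = read the port, 1 = write it      */
        unsigned short port;
        unsigned char  value;   /* value to write, or filled in on read */
    };

    /* Stand-ins for message passing and for the port instructions. */
    static int  receive_io_msg(struct io_msg *m) { (void)m; return -1; }
    static void reply_io_msg(const struct io_msg *m) { (void)m; }
    static unsigned char inb(unsigned short p) { (void)p; return 0; }
    static void outb(unsigned short p, unsigned char v) { (void)p; (void)v; }

    /* The entire server: one port access per message. */
    static void io_server(void)
    {
        struct io_msg m;
        while (receive_io_msg(&m) == 0) {
            if (m.write)
                outb(m.port, m.value);
            else
                m.value = inb(m.port);
            reply_io_msg(&m);
        }
    }

    int main(void)
    {
        io_server();   /* returns at once: the stub delivers no messages */
        return 0;
    }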

Jonathan

LINUX is obsolete ast 05.02.92 06:48
In article <61...@skye.ed.ac.uk> richard@aiai.UUCP (Richard Tobin) writes:
>If you wanted a full-featured Unix with paging, job-control, a window
>system and so on, would it be quicker to start from basic Minix and
>add the features, or to start from Linux and fix the 386-specific
>bits?  

Another option that seems to be totally forgotten here is to buy UNIX or a
clone.  If you just want to USE the system, instead of hacking on its
internals, you don't need source code.  Coherent is only $99, and there
are various true UNIX systems with more features for more money.  For the
true hacker, not having source code is fatal, but for people who just
want a UNIX system, there are many alternatives (albeit not free).

Andy Tanenbaum (a...@cs.vu.nl)

I/O protection Richard Tobin 05.02.92 08:18
In article <1992Feb2.1...@colorado.edu> dr...@anchor.cs.colorado.edu (Drew Eckhardt) writes:
>Every 386 TSS has an I/O permission bitmap.  If the CPL is of a lower privilege
>level than IOPL, the I/O permission bitmap is consulted, allowing protection
>on a port-by-port basis.

I was looking into using this recently under Minix 386, and to check I
was doing the right thing, I wrote a user program to access the video
registers.  The idea was to have it fail, and then change the kernel
to make it work.  To my surprise, it worked anyway...

I'll take a closer look sometime, but does anyone (Bruce?) happen to
already know the explanation?

-- Richard

--
Richard Tobin,
AI Applications Institute,                                R.T...@ed.ac.uk
Edinburgh University.

LINUX is obsolete John W. Linville 05.02.92 09:56
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:

Coherent is limited by a compiler that only supports the small memory model,
making it just as difficult (perhaps more so in some instances) to port
'standard' Unix programs to Coherent as it is under Minix.  Also, Coherent is not
portable (or at least, to the best of my knowledge, has not been ported), so
this advocacy contradicts one of your arguments against Linux.

Since a true Unix system often costs as much as the machine it runs on (even
more since many Unix providers un-bundle networking and development packages),
buying a true Unix system is simply beyond the budget of many people.

John W. Linville

LINUX is obsolete Lawrence C. Foard 05.02.92 06:56
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>Don't get me wrong, I am not unhappy with LINUX.  It will get all the people
>who want to turn MINIX into BSD UNIX off my back.  But in all honesty, I would
>suggest that people who want a **MODERN** "free" OS look around for a
>microkernel-based, portable OS, like maybe GNU or something like that.

I believe you have some valid points, although I am not sure that a
microkernel is necessarily better. It might make more sense to allow some
combination of the two. As part of the IPC code I'm writing for Linux I am
going to include code that will allow device drivers and file systems to run
as user processes. These will be significantly slower though, and I believe it
would be a mistake to move everything outside the kernel (TCP/IP will be
internal).

Actually my main problem with OS theorists is that they have never tested
their ideas! None of these ideas (with a partial exception for MACH) has ever
seen the light of day. 32 bit home computers have been available for almost a
decade and Linus was the first person to ever write a working OS for them
that can be used without paying AT&T $100,000. A piece of software in hand is
worth ten pieces of vaporware; OS theorists are quick to jump all over an OS,
but they are unwilling to ever provide an alternative.

The general consensus that microkernels are the way to go means nothing when
a real application has never even run on one.

The release of Linux is allowing me to try some ideas I've been wanting to
experiment with for years, but I have never had the opportunity to work with
source code for a functioning OS.
--
Disclaimer: Opinions are based on logic rather than biblical "fact".   ------
Hackers do it for fun.  | First they came for the drug users, I said   \    /
"Profesionals" do it for money. | nothing, then they came for hackers,  \  /
Managers have others do it for them. | I said nothing... STOP W.O.D.     \/

LINUX is obsolete David Megginson 05.02.92 12:50
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>Another option that seems to be totally forgotten here is to buy UNIX or a
>clone.  If you just want to USE the system, instead of hacking on its
>internals, you don't need source code.  Coherent is only $99, and there
>are various true UNIX systems with more features for more money.  For the
>true hacker, not having source code is fatal, but for people who just
>want a UNIX system, there are many alternatives (albeit not free).

What Unix's _are_ available for a simple, M68000-based ST, with _or_
without source? These are the only options I know of:

1) OS 9.
2) The Beckmeyer MT C-Shell.
3) MiNT.
4) Minix.

I have used all of these except for OS 9, and Minix is clearly the
closest thing to Unix that I can run (though it is easier to port BSD
programs to MiNT using the MiNT gcc library). I could shell out CAN
$3000 for a TT, but then I may as well buy a 386 box anyway. Besides,
I _like_ having the source. The extra advantage of Minix is that the
user base is a lot wider than the ST market, so I can get decent
system enhancements from Amiga, Mac, Sparc, XT, AT, '386 and '486
users as well as from fellow ST owners.


David

#################################################################
David Megginson                  meg...@epas.utoronto.ca
Centre for Medieval Studies      da...@doe.utoronto.ca
University of Toronto            39 Queen's Park Cr. E.
#################################################################

LINUX is obsolete ast 05.02.92 15:33
In article <1992Feb5....@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
>Actually my main problem with OS theorists is that they have never tested
>their ideas!
I'm mortally insulted.  I AM NOT A THEORIST.  Ask anybody who was at our
department meeting yesterday (in joke).

Actually, these ideas have been very well tested in practice.  OSF is betting
its whole business on a microkernel (Mach 3.0).  USL is betting its business
on another one (Chorus).  Both of these run lots of software, and both have
been extensively compared to monolithic systems.  Amoeba has been fully
implemented and tested for a number of applications.  QNX is a microkernel
based system, and someone just told me the installed base is 200,000 systems.
Microkernels are not a pipe dream.  They represent proven technology.

The Mach guys wrote a paper called "UNIX as an application program."
It was by Golub et al., in the Summer 1990 USENIX conference.  The Chorus
people also have a technical report on microkernel performance, and I
coauthored another paper on the subject, which I mentioned yesterday
(Dec. 1991 Computing Systems).  Check them out.

Andy Tanenbaum (a...@cs.vu.nl)

LINUX is obsolete Lawrence C. Foard 06.02.92 01:22
In article <1992Feb3.0...@menudo.uh.edu> ke...@taronga.taronga.com (Kevin Brown) writes:
>Dr. Tanenbaum claims that the microkernel architecture is the way to go.
>He has a great deal more experience with operating systems than I have.
>It's an understatement that it's likely that there's some substance to
>his statement.  :-)

I tend to prefer seeing for myself rather than accepting "expert" opinion.
Microkernels are nice aesthetically, but there are times when practical issues
must also be considered :)

>So from my point of view, the architecture of Minix is not ideal.  While
>it may win in that it's a "microkernel" architecture, the division of
>functionality is not entirely to my liking.  As is undoubtedly plainly
>obvious by now.  :-)

I've been told by people who have used both that Linux is significantly
faster. There are certainly several factors involved (certainly using 32 bits
helps a lot), but the multithreading also makes for much lower overhead.

>However, as a *multiuser* operating system, i.e. an operating system designed
>to efficiently meet the needs of multiple users simultaneously while also
>performing batch operations, Minix is lacking, as far as I'm concerned.  
>The main reason, of course, is the single-threaded file system (hereafter,
>STFS).  Now, Dr. Tanenbaum may feel that a multi-threaded file system
>(hereafter, MTFS) is merely a performance hack.

I think this is a very valid problem. There are two ways a single-threaded FS
could work, and both have substantial problems. If the FS blocks while waiting
for I/O, it would be completely unusable for "real" work. Imagine several users
accessing a database: if the FS blocks for I/O, they will have to wait
even though the data they are looking for is already in the cache. If it is
designed to be non-blocking, then it is even more complicated than a
multithreaded FS and will have more overhead. I hope it is at least the second.
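
To make the first case concrete: a blocking single-threaded file server
is essentially the loop below (a sketch, all names invented), and a cache
hit queued behind someone else's disc read still waits for the disc.

    #include <stddef.h>

    struct fs_request { int sender; long blockno; };

    /* Stubs; only the shape of the loop matters. */
    static struct fs_request next_request(void)
    { struct fs_request r = { 0, 0L }; return r; }
    static void *cache_lookup(long b) { (void)b; return NULL; }
    static void *read_from_disc(long b) { (void)b; return NULL; } /* blocks */
    static void reply(int sender, void *blk) { (void)sender; (void)blk; }

    static void fs_server(int requests)
    {
        while (requests-- > 0) {                 /* one request at a time  */
            struct fs_request r = next_request();
            void *blk = cache_lookup(r.blockno);
            if (blk == NULL)                     /* miss: go to the disc,  */
                blk = read_from_disc(r.blockno); /* and every caller waits */
            reply(r.sender, blk);
        }
    }

    int main(void) { fs_server(3); return 0; }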

>However much Minix may be lacking from my standpoint, it is nevertheless
>a very useful and welcome enhancement to my system.  In spite of the
>impression that I may have given everyone in my last posting, there will
>always be a soft spot in my heart for it, if only because it's the first
>decent operating system I've had on my system that I've had source to.
>I don't have to tell you people how incredibly useful it is to have source.
>You already know.

I will agree here: Minix is infinitely better than Messy-Loss :)

>Given the reasons Linus wrote Linux, I think it's hard for anyone to fault
>him for writing it the way he did.  And he was extremely nice in making
>his code freely available to the rest of the world.  It's not something he
>had to do.  In my book, that makes him almost beyond reproach.

I think more effort has been put into making practical use of Linux possible.
An educational OS is nice, but there is a world outside of colleges that
is suffering from the lack of cheap and useful OS's.  I've been stuck doing
most consulting work in Messy-Loss because customers don't want to fork out
$1000 for UNIX.

>Dr. Tanenbaum didn't make Minix free.  His goals were different.  Minix
>is a teaching aid above all else (unless Dr. Tanenbaum has changed his
>views about Minix :-).  That means that he must be concerned with the
>most efficient way to get Minix to the student population.  At the time
>Minix was released, Prentice-Hall was a good solution, and has been for
>some time.  However, I must wonder whether or not this is still the case.
>Dr. Tanenbaum: do you still feel that free distribution of Minix via the
>net is not the best way to distribute Minix?

I would guess that Prentice-Hall would have some objections :)


--
Disclaimer: Opinions are based on logic rather than biblical "fact".   ------
This is your friendly   | First they came for the drug users, I said   \    /
neighborhood signature virus    | nothing, then they came for hackers,  \  /
please add me to your signature! |     I said nothing... STOP W.O.D.     \/

LINUX is obsolete Timothy Murphy 06.02.92 03:14
In <1992Feb5....@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
>32 bit home computers have been available for almost a
>decade and Linus was the first person to ever write a working OS for them
>that can be used without paying AT&T $100,000. A piece of software in hand is
>worth ten pieces of vaporware, OS theorists are quick to jump all over an OS
>but they are unwilling to ever provide an alternative.

Surely Bruce Evans' 386-Minix preceded Linux?

(Diffs for PC-Minix -> 386-Minix
available from archive...@plains.nodak.edu
in the directory Minix/oz)

--
Timothy Murphy  
e-mail: t...@maths.tcd.ie
tel: +353-1-2842366
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland

LINUX is obsolete Tony Travis 05.02.92 18:17
a...@cs.vu.nl (Andy Tanenbaum) writes:
> Another option that seems to be totally forgotten here is to buy UNIX or a
> clone.  If you just want to USE the system, instead of hacking on its
> internals, you don't need source code.  Coherent is only $99, and there
> are various true UNIX systems with more features for more money.  For the
> true hacker, not having source code is fatal, but for people who just
> want a UNIX system, there are many alternatives (albeit not free).

Andy, I have followed the development of Minix since the first messages
were posted to this group and I am now running 1.5.10 with Bruce
Evans's patches for the 386.

I 'just' want a Unix on my PC and I am not interested in hacking on its
internals, but I *do* want the source code!

An important principle underlying the success and popularity of Unix is
the philosophy of building on the work of others.

This philosophy relies upon the availability of the source code in
order that it can be examined, modified and re-used in new software.

Many years ago, I was in the happy position of being an AT&T Seventh
Edition Unix source licensee but, even then, I saw your decision to
make the source of Minix available as liberation from the shackles of
AT&T copyright!!

I think you may sometimes forget that your 'hobby' has had a profound
effect on the availability of 'personal' Unix (i.e. affordable Unix) and
that the 8086 PC I ran Minix 1.2 on actually cost me considerably more
than my present 386/SX clone.

Clearly, Minix _cannot_ be all things to all men, but I see the
progress to 386 versions in much the same way that I see 68000 or other
linear address space architectures: it is a good thing for people like
me who use Minix and feel constrained by the segmented architecture of
the PC version for applications.

NOTHING you can say would convince me that I should use Coherent ...

        Tony

--
-------------------------------------------------------------------------------
 Dr. A.J.Travis <ajt@uk.ac.sari.rri>  | Rowett Research Institute,
                                      | Greenburn Road, Bucksburn, Aberdeen,
                                      | AB2 9SB. UK. tel 0224-712751

LINUX is obsolete Jerry Shekhel 06.02.92 13:28
linv...@garfield.catt.ncsu.edu (John W. Linville) writes:
>
>Since a true Unix system often costs as much as the machine it runs on (even
>more since many Unix providers un-bundle networking and development packages),
>buying a true Unix system is simply beyond the budget of many people.
>

For those who may be interested, MST sells System V Release 4.0.3 for the
386/486 for $399 including development system, $499 if you need networking.
X11R5 binaries may be obtained via FTP (networking is not required for X11R5).
I have just such a setup, and it works great.  MST's version of UNIX doesn't
have too much in the way of bug fixes relative to the AT&T code, but the
only thing I've really had problems with was a couple of bugs in csh.  Now
that I have tcsh working (built without so much as a warning!) I'll never go
back :-)

Micro Station Technology
1140 Kentwood Avenue
Cupertino, CA 95014
Tel: 408-253-3898
Fax: 408-253-7853

I am not affiliated with MST except as a customer.

>
>John W. Linville
>
--
+-------------------+----------------------+---------------------------------+
| JERRY J. SHEKHEL  | POLYGEN CORPORATION  | When I was young, I had to walk |
| Drummers do it... | Waltham, MA USA      | to school and back every day -- |
|    ... In rhythm! | (617) 890-2175       | 20 miles, uphill both ways.     |
+-------------------+----------------------+---------------------------------+
|           ...! [ princeton mit-eddie bu sunne ] !polygen!jerry             |
|                            je...@polygen.com                               |
+----------------------------------------------------------------------------+

LINUX is obsolete peter da silva 06.02.92 08:02
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
> QNX is a microkernel
> based system, and someone just told me the installed base is 200,000 systems.

Oh yes, while I'm on the subject... there are over 3 million Amigas out there,
which means that there are more of them than any UNIX vendor has shipped, and
probably more than all UNIX systems combined.


--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"

LINUX is obsolete peter da silva 06.02.92 08:00
In article <1992Feb5....@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
> Actually my main problem with OS theorists is that they have never tested
> their ideas!

I beg to differ... there are many microkernel operating systems out there
for everything from an 8088 (QNX) up to large research systems.

> None of these ideas (with a partial exception for MACH) has ever
> seen the light of day. 32 bit home computers have been available for almost a
> decade and Linus was the first person to ever write a working OS for them
> that can be used without paying AT&T $100,000.

I must have been imagining AmigaOS, then. I've been using a figment of my
imagination for the past 6 years.

AmigaOS is a microkernel message-passing design, with better response time
and performance than any other readily available PC operating system: including
MINIX, OS/2, Windows, MacOS, Linux, UNIX, and *certainly* MS-DOS.

The microkernel design has proven invaluable. Things like new file systems
that are normally available only from the vendor are hobbyist products on
the Amiga. Device drivers are simply shared libraries and tasks with specific
entry points and message ports. So are file systems, the window system, and
so on. It's a WONDERFUL design, and validates everything that people have
been saying about microkernels. Yes, it takes more work to get them off the
ground than a coroutine based macrokernel like UNIX, but the versatility
pays you back many times over.

I really wish Andy would do a new MINIX based on what has been learned since
the first release. The factoring of responsibilities in MINIX is fairly poor,
but the basic concept is good.

> The general consensus that microkernels are the way to go means nothing when
> a real application has never even run on one.

I'm dreaming again. I sure thought Deluxe Paint, Sculpt 3d, Photon Paint,
Manx C, Manx SDB, Perfect Sound, Videoscape 3d, and the other programs I
bought for my Amiga were "real". I'll have to send the damn things back now,
I guess.

The availability of Linux is great. I'm delighted it exists. I'm sure that
the macrokernel design is one reason it has been implemented so fast, and this
is a valid reason to use macrokernels. BUT... this doesn't mean that
microkernels are inherently slow, or simply research toys.


--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"

LINUX is obsolete Tim W Smith 06.02.92 17:30
Andy Tanenbaum (a...@cs.vu.nl) writes:
> The drivers have to read and write the device registers in I/O space, and
> this cannot be done in user mode on the 286 and 386. If it were possible
> to do I/O in a protected way in user space, all the I/O tasks could have
> been user programs, like FS and MM.

On the 386, you could run the drivers in V86 mode, which sort of counts
as user mode and allows access to I/O registers if the kernel sets things
up to allow this.

                                                        Tim Smith

LINUX is obsolete Tim W Smith 06.02.92 18:09
> Actually my main problem with OS theorists is that they have never tested
> their ideas! None of these ideas (with a partial exception for MACH) has ever
> seen the light of day. 32 bit home computers have been available for almost a
> decade and Linus was the first person to ever write a working OS for them
> that can be used without paying AT&T $100,000. A piece of software in hand is

How about Netware 386 from Novell?  It seems to work.

                                                        Tim Smith

LINUX is obsolete Richard Tobin 07.02.92 06:58
In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
>If you just want to USE the system, instead of hacking on its
>internals, you don't need source code.

Unfortunately hacking on the internals is just what many of us want
the system for...  You'll be rid of most of us when BSD-detox or GNU
comes out, which should happen in the next few months (yeah, right).

-- Richard
--
Richard Tobin,
AI Applications Institute,                                R.T...@ed.ac.uk
Edinburgh University.

LINUX is obsolete bert thompson 07.02.92 23:43
feustel@netcom.COM (David Feustel) writes:

>That's ok. Einstein got lousy grades in math and physics.

        no he didn't.

        bert.

LINUX is obsolete Rogier Wolff 08.02.92 01:13
a...@cs.vu.nl (Andy Tanenbaum) writes:
>In article <1992Feb5....@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
>>Actually my main problem with OS theorists is that they have never tested
>>their ideas!
>I'm mortally insulted.  I AM NOT A THEORIST.  Ask anybody who was at our
>department meeting yesterday (in joke).
>Actually, these ideas have been very well tested in practice.  

The problem is that to really do an unbiased test you would need two
*identical* teams, and ask them to make two OS's for the same
target machine, one using a microkernel architecture and the other
using the monolithic approach. This is in practice not feasible, and the
publications on the subject can only shout: "look: I've got good
performance using a microkernel", "we've got very good performance using
a monolithic approach", or "it only took us X months to implement this OS".

When people did benchmark their OS's, they had written the OS for one
architecture and adapted it to test the other. This adaptation will
naturally degrade performance, and show that the designers were right in
the first place.

Anyway, anybody have an opinion about the fact that the code for printf
is included three times in the Minix OS when it runs (once each in the
kernel, MM, and FS)?

                                                        Roger

--
If the opposite of "pro" is "con", what is the opposite of "progress"?
        (stolen from  kadokev@iitvax ==? tech...@iitmax.iit.edu)
EMail:  wo...@duteca.et.tudelft.nl   ** Tel  +31-15-783644 or +31-15-142371

LINUX is obsolete David Megginson 08.02.92 07:04
In article <1992Feb08.0...@donau.et.tudelft.nl> wo...@neuron.et.tudelft.nl (Rogier Wolff) writes:
>Anyway, anybody have an opinion about the fact that code for printf
>is included three times in the Minix OS when it runs (once in the
>kernel, MM and FS)

Back in the yore days, this might have been a problem. I remember when
every program, even wordprocessors, had to be written in assembler to
squeeze them down to the smallest size possible for a 64K system. One
of the reasons WordPerfect is such a mess today is that it was written
in assembler instead of C.

Now, even the small systems which Minix runs have at least 640K, so a
few wasted bytes are not so much of a problem.

Why not write Linux in 80386 assembler? It would be smaller and even
faster. And don't forget to code inline as much as possible, to avoid
the crippling overhead of function calls. And leave out comments,
because they waste disk space.


David

#################################################################
David Megginson                  meg...@epas.utoronto.ca
Centre for Medieval Studies      da...@doe.utoronto.ca
University of Toronto            39 Queen's Park Cr. E.
#################################################################

LINUX is obsolete Kevin Brown 09.02.92 01:02
In article <1992Feb6.0...@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
>In article <1992Feb3.0...@menudo.uh.edu> ke...@taronga.taronga.com (Kevin Brown) writes:
>>Dr. Tanenbaum claims that the microkernel architecture is the way to go.
>>He has a great deal more experience with operating systems than I have.
>>It's an understatement that it's likely that there's some substance to
>>his statement.  :-)
>
>I tend to prefer seeing for myself rather than accepting "expert" opinion.
>Microkernels are nice asthetically, but there are times when practical issues
>must also be considered :)

I agree.  This is why I qualified my statement the way I did.  :-)

Having seen both monolithic and microkernel architectures running, though,
I tend to agree that microkernels are generally the way to go, all other
things being equal.

But as you say, all things are not always equal.  That's when it becomes
a judgement call.  Which is better?  Depends on what you're trying to do.

>I've been told by people who have used both that Linux is significantly
>faster. There are certainly several factors involved (certainly using 32 bits
>helps alot), but the multithreading also makes for much lower overhead.

Yup.  I think that if Minix were arranged so that it had message queueing
and a true multithreaded filesystem, it might be comparable to a monolithic
kernel in terms of speed.  It's hard for me to say, though.  I haven't
played around much with multithreaded filesystems, so I don't know how
hard it is to make them work efficiently.  I'd think, though, that it would
depend enormously on how efficient your device drivers were, and how much
data copying you'd have to do (ideally, you'd pass references to the data
buffers around and do your actual data transfers directly to the user's
buffer).

>>However, as a *multiuser* operating system, i.e. an operating system designed
>>to efficiently meet the needs of multiple users simultaneously while also
>>performing batch operations, Minix is lacking, as far as I'm concerned.  
>>The main reason, of course, is the single-threaded file system (hereafter,
>>STFS).  Now, Dr. Tanenbaum may feel that a multi-threaded file system
>>(hereafter, MTFS) is merely a performance hack.
>
>I think this is a very valid problem. There are two ways a single threaded FS
>could work and both have substantial problems. If the FS blocks while waiting
>for I/O it would be completely unusable for "real" work. Imagine several users
>accessing a database, if the FS blocks for I/O they will have to wait
>eventhough the data they are looking for is already in the cache. If it is
>designed to be non blocking then it is even more complicated than a
>multithreaded FS and will have more overhead. I hope it is atleast the second

I haven't gone deeply into the source code of the Minix file system, but
the impression I get from my perusing of it is that it blocks on disk I/O
but not on terminal I/O, the idea being that disk I/O requests will almost
always be satisfied relatively soon after they are made, whereas terminal
I/O requests can take an indefinite amount of time to satisfy.

But it seems to me that if you're going to implement the mechanism to handle
I/O where the file system doesn't block waiting for it, why not use that
mechanism universally???

>>However much Minix may be lacking from my standpoint, it is nevertheless
>>a very useful and welcome enhancement to my system.  In spite of the
>>impression that I may have given everyone in my last posting, there will
>>always be a soft spot in my heart for it, if only because it's the first
>>decent operating system I've had on my system that I've had source to.
>>I don't have to tell you people how incredibly useful it is to have source.
>>You already know.
>
>I will agree here, Minix is infinitly better than Messy-Loss :)

Which is why I try to avoid using MS-DOS whenever possible.  I'll bet a
lot of us Minixers do the same.  :-)

>>Given the reasons Linus wrote Linux, I think it's hard for anyone to fault
>>him for writing it the way he did.  And he was extremely nice in making
>>his code freely available to the rest of the world.  It's not something he
>>had to do.  In my book, that makes him almost beyond reproach.
>
>I think more effort has been put into making practical use of Linux possible.
>An educational OS is nice, but there is a world outside of colleges that
>is suffering from the lack of cheap and useful OS's, I've been stuck doing
>most consulting work in Messy Loss because customers don't want to fork out
>$1000 for UNIX.

Even students can make good use of something like Linux.  I have 8 megabytes
of RAM on my machine, and 410 meg of harddrive space.  Yet I can barely
run SBProlog on my system, even though my system is considerably more macho
than most.  If I had demand paging on my system, this wouldn't be a problem,
but the only patches I have for demand paging seem not to work very well.
Once Linux becomes more stable (and gets support for Seagate ST-02 SCSI),
I'll snag the sources and check it out.  Since I already own Minix, I can
legally transport *everything* over to it, and since both share the same
filesystem layout, I can do the transporting with a minimum of hassle.

>>Dr. Tanenbaum didn't make Minix free.  His goals were different.  Minix
>>is a teaching aid above all else (unless Dr. Tanenbaum has changed his
>>views about Minix :-).  That means that he must be concerned with the
>>most efficient way to get Minix to the student population.  At the time
>>Minix was released, Prentice-Hall was a good solution, and has been for
>>some time.  However, I must wonder whether or not this is still the case.
>>Dr. Tanenbaum: do you still feel that free distribution of Minix via the
>>net is not the best way to distribute Minix?
>
>I would guess that Prentice-Hall would have some objections :)

No doubt.  :-(


--
Kevin Brown                                                Disclaimer: huh?
ke...@taronga.com                                  ke...@nuchat.sccsi.com

LINUX is obsolete peter da silva 09.02.92 19:10
In article <1992Feb08.0...@donau.et.tudelft.nl> wo...@neuron.et.tudelft.nl (Rogier Wolff) writes:
> The problem is that to really do an unbiased test you would need two
> *identical* teams, and ask them to make two OS's [...]

No, you don't. I don't think there's any question that a macrokernel is
very easy to get decent performance out of. Where the microkernel design
has a major advantage is in flexibility. Adding stuff to a macrokernel
is fairly complex and quickly becomes pretty gross. Look at BSD or System V
for examples. Adding stuff to a well designed microkernel is VERY easy.

Sometimes you don't want to compare oranges and oranges. Sometimes you want
to compare concentrated orange juice with fresh-squeezed. Fresh-squeezed
takes longer, but it's worth it.

Plus, with a microkernel you can get much better context switching between
microtasks than macro processes. So you can do stuff in separate processes
that would be out of the question in a macrokernel, and avoid nonsense like
the myriad inconsistencies in NFS.

> anyone have an opinion about why the code for printf
> is included three times in the Minix OS when it runs (once in the
> kernel, MM and FS)

Anyone have an opinion why the code for printf is included only once in
AmigaOS (even though the AmigaOS 2.04 "kernel" is actually a dozen or
more separate processes)?

Minix is a poor technology demonstrator for microkernels. Which is OK, since
it wasn't supposed to be one.


--
-- Peter da Silva,  Ferranti International Controls Corporation
-- Sugar Land, TX  77487-5012;  +1 713 274 5180
-- "Have you hugged your wolf today?"
LINUX is obsolete Dave Smythe 09.02.92 23:08
In article <1992Feb5....@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
>Actually my main problem with OS theorists is that they have never tested
>there ideas! None of these ideas (with a partial exception for MACH) has ever
>seen the light of day.

David Cheriton (Prof. at Stanford, and author of the V system) said something
similar to this in a class in distributed systems.  Paraphrased:

  "There are two kinds of researchers: those that have implemented
   something and those that have not.  The latter will tell you that
   there are 142 ways of doing things and that there isn't consensus
   on which is best.  The former will simply tell you that 141 of
   them don't work."

He really rips on the OSI-philes as well, for a similar reason.  The Internet
protocols are adapted only after having been in use for a period of time,
preventing things from getting standardized that will never be implementable
in a reasonable fashion.  OSI adherents, on the other hand, seem intent on
standardizing everything possible, including "escapes" from the standard,
before a reasonable reference implementation exists.  Consequently, you see
obsolete ideas immortalized, such as sub-byte-level data field packing,
which makes good performance difficult when your computer is drinking from
a 10+ Gbs fire-hose :-).

Just my $.02

D

--
========================================================================
Dave Smythe   N6XLP    dsm...@netcom.com (also dsm...@cs.stanford.edu)

LINUX is obsolete Bill Mitchell 10.02.92 07:03
in comp.os.minix, dsmythe@netcom.COM (Dave Smythe) said:

>In article <1992Feb5....@wpi.WPI.EDU> ent...@wintermute.WPI.EDU (Lawrence C. Foard) writes:
>
>David Cheriton (Prof. at Stanford, and author of the V system) said something
>similar to this in a class in distributed systems.  Paraphrased:
>
>  "There are two kinds of researchers: those that have implemented
>   something and those that have not.  The latter will tell you that
>   there are 142 ways of doing things and that there isn't consensus
>   on which is best.  The former will simply tell you that 141 of
>   them don't work."
>

Yeah, but what's the odds on two who have implemented something differently
agreeing on which 141 don't work?

--
mitc...@mdd.comm.mot.com (Bill Mitchell)

LINUX is obsolete Christopher Stuart 11.02.92 05:52
Article 18297 of comp.os.minix:
Path: icdoc!uknet!mcsun!uunet!cis.ohio-state.edu!rutgers!news-server.csri.toronto.edu!utgpu!news-server.ecf!epas!meggin
From: meg...@epas.utoronto.ca (David Megginson)
Newsgroups: comp.os.minix
Subject: Re: LINUX is obsolete
Message-ID: <1992Feb8.1...@epas.toronto.edu>
Date: 8 Feb 92 15:04:31 GMT
References: <1992Feb5....@wpi.WPI.EDU> <12...@star.cs.vu.nl> <1992Feb08.0...@donau.et.tudelft.nl>
Sender: ne...@epas.toronto.edu (USENET)
Organization: University of Toronto - EPAS
Lines: 28
Nntp-Posting-Host: epas.utoronto.ca


David


--
/*----------------------------------------------------------------------------*/
/*  Christopher Stuart:  c...@doc.ic.ac.uk                                      */
/*                         Dept. Computing, Imperial College, London.              */
/*----------------------------------------------------------------------------*/

Re: LINUX is obsolete Omniscientist 10.01.05 06:02
Well, sorry Ken...but it looks like microkernels didn't last as long as
you thought. I'm on Linux right now...and minix isn't anywhere in
sight..
Re: LINUX is obsolete the...@gmail.com 11.01.05 01:34
i'm glad I read this.

Kevin Brown wrote:
> In article <12...@star.cs.vu.nl> a...@cs.vu.nl (Andy Tanenbaum) writes:
> >
> >I was in the U.S. for a couple of weeks, so I haven't commented much on
> >LINUX (not that I would have said much had I been around), but for what
> >it is worth, I have a couple of comments now.
> >
> >As most of you know, for me MINIX is a hobby, something that I do in the
> >evening when I get bored writing books and there are no major wars,
> >revolutions, or senate hearings being televised live on CNN.  My real
> >job is a professor and researcher in the area of operating systems.
> >
> >As a result of my occupation, I think I know a bit about where operating
> >systems are going in the next decade or so.  Two aspects stand out:
> >
> >1. MICROKERNEL VS MONOLITHIC SYSTEM
> >   Most older operating systems are monolithic, that is, the whole operating
> >   system is a single a.out file that runs in 'kernel mode.'  This binary
> >   contains the process management, memory management, file system and the
> >   rest. Examples of such systems are UNIX, MS-DOS, VMS, MVS, OS/360,
> >   MULTICS, and many more.
> >
> >   The alternative is a microkernel-based system, in which most of the OS
> >   runs as separate processes, mostly outside the kernel.  They communicate
> >   by message passing.  The kernel's job is to handle the message passing,
> >   interrupt handling, low-level process management, and possibly the I/O.
> >   Examples of this design are the RC4000, Amoeba, Chorus, Mach, and the
> >   not-yet-released Windows/NT.
> >
> >   While I could go into a long story here about the relative merits of the
> >   two designs, suffice it to say that among the people who actually design
> >   operating systems, the debate is essentially over.  Microkernels have won.
> >   The only real argument for monolithic systems was performance, and there
> >   is now enough evidence showing that microkernel systems can be just as
> >   fast as monolithic systems (e.g., Rick Rashid has published papers comparing
> >   Mach 3.0 to monolithic systems) that it is now all over but the shoutin`.
>
> Of course, there are some things that are best left to the kernel, be it
> micro or monolithic.  Like things that require playing with the process'
> stack, e.g. signal handling.  Like memory allocation.  Things like that.
>
> The microkernel design is probably a win, all in all, over a monolithic
> design, but it depends on what you put in the kernel and what you leave
> out.
>
> >   MINIX is a microkernel-based system.  The file system and memory management
> >   are separate processes, running outside the kernel.  The I/O drivers are
> >   also separate processes (in the kernel, but only because the brain-dead
> >   nature of the Intel CPUs makes that difficult to do otherwise).
>
> Minix is a microkernel design, of sorts.  The problem is that it gives special
> priveleges to mm and fs, when there shouldn't be any (at least for fs).  It
> also fails to integrate most of the functionality of mm in the kernel itself,
> and this makes things like signal handling and memory allocation *really*
> ugly.  If you did these things in the kernel itself, then signal handling
> would be as simple as setting a virtual interrupt vector and causing the
> signalled process to receive that interrupt (with the complication that
> system calls might have to be terminated.  Which means that a message would
> have to be sent to every process that is servicing the process' system call,
> if any.  It's considerations like these that make the monolithic kernel
> design appealing).
>
> The *entire* system call interface in Minix needs to be rethought.  As it
> stands right now, the file system is not just a file system, it's also a
> system-call server.  That functionality needs to be separated out in order
> to facilitate a multiple file system architecture.  Message passing is
> probably the right way to go about making the call and waiting for it, but
> the message should go to a system call server, not the file system itself.
>
> In order to handle all the special caveats of the Unix API, you end up writing
> a monolithic "kernel" even if you're using a microkernel base.  You end up
> with something called a "server", and an example is the BSD server that runs
> under Mach.
>
> And, in any case, the message-passing in Minix needs to be completely redone.
> As it is, it's a kludge.  I've been giving this some thought, but I haven't
> had time to do anything with what I've thought of so far.  Suffice it to say
> that the proper way to do message-passing is probably with message ports
> (both public and private), with the various visible parts of the operating
> system having public message ports.  Chances are, that ends up being the
> system call server only, though this will, of course, depend on the goals
> of the design.
>
> >   LINUX is
> >   a monolithic style system.  This is a giant step back into the 1970s.
> >   That is like taking an existing, working C program and rewriting it in
> >   BASIC.  To me, writing a monolithic system in 1991 is a truly poor idea.
>
> Depends on the design criteria, as you should know.  If your goal is to
> design a Unix workalike that is relatively simple and relatively small,
> then a monolithic design is probably the right approach for the job, because
> unless you're designing for really backwards hardware, the problems of
> things like interrupted system calls, memory allocation within the kernel
> (so you don't have to statically allocate *everything* in your OS), signal
> handling, etc. all go away (or are at least minimized) if you use a
> monolithic design.  If you want the ability to bring up and take down
> file systems, add and remove device drivers, etc., all at runtime, then
> a microkernel approach is the right solution.
>
> Frankly, I happen to like the idea of removable device drivers and such,
> so I tend to favor the microkernel approach as a general rule.
>
> >2. PORTABILITY
> >   Once upon a time there was the 4004 CPU.  When it grew up it became an
> >   8008.  Then it underwent plastic surgery and became the 8080.  It begat
> >   the 8086, which begat the 8088, which begat the 80286, which begat the
> >   80386, which begat the 80486, and so on unto the N-th generation.  In
> >   the meantime, RISC chips happened, and some of them are running at over
> >   100 MIPS.  Speeds of 200 MIPS and more are likely in the coming years.
> >   These things are not going to suddenly vanish.  What is going to happen
> >   is that they will gradually take over from the 80x86 line.  They will
> >   run old MS-DOS programs by interpreting the 80386 in software.  (I even
> >   wrote my own IBM PC simulator in C, which you can get by FTP from
> >   ftp.cs.vu.nl =  192.31.231.42 in dir minix/simulator.)  I think it is a
> >   gross error to design an OS for any specific architecture, since that is
> >   not going to be around all that long.
>
> Again, look at the design criteria.  If portability isn't an issue, then
> why worry about it?  While LINUX suffers from lack of portability, portability
> was obviously never much of a consideration for its author, who explicitly
> stated that it was written as an exercise in learning about the 386
> architecture.
>
> And, in any case, while MINIX is portable in the sense that most of the code
> can be ported to other platforms, it *still* suffers from the limitations of
> the original target machine that drove the walk down the design decision tree.
> The message passing is a kludge because the 8088 is slow.  The kernel doesn't
> do memory allocation (thus not allowing FS and the drivers to get away with
> using a malloc library or some such, and thus causing everyone to have to
> statically allocate everything), probably due to some other limitation of
> the 8088.  The very idea of using "clicks" is obviously the result of the
> segmented architecture of the 8088.  The file system size is too limited
> (theoretically fixed in 1.6, but now you have *two* file system formats to
> contend with.  If having the file system as a separate process is such a
> big win, then why don't we have two file system servers, eh?  Why simply
> extend the existing Minix file system instead of implementing BSD's FFS
> or some other high-performance file system?  It's not that I'm greedy
> or anything... :-).
>
> >   MINIX was designed to be reasonably portable, and has been ported from the
> >   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
> >   LINUX is tied fairly closely to the 80x86.  Not the way to go.
>
> All in all, I tend to agree.
>
> >Don`t get me wrong, I am not unhappy with LINUX.  It will get all the people
> >who want to turn MINIX in BSD UNIX off my back.  But in all honesty, I would
> >suggest that people who want a **MODERN** "free" OS look around for a
> >microkernel-based, portable OS, like maybe GNU or something like that.
>
> Yeah, right.  Point me someplace where I can get a free "modern" OS and I'll
> gladly investigate.  But the GNU OS is currently vaporware, and as far as I'm
> concerned it will be for a LOOOOONG time to come.
>
> Any other players?  BSD 4.4 is a monolithic architecture, so by your
> definition it's out.  Mach is free, but the BSD server isn't (AT&T code,
> you know), and in any case, isn't the BSD server something you'd consider
> to be a monolithic design???
>
> Really.  Why do you think LINUX is as popular as it is?  The answer is
> simple, of course: because it's the *only* free Unix workalike OS in
> existence.  BSD doesn't qualify (yet).  Minix doesn't qualify.  XINU
> isn't even in the running.  GNU's OS is vaporware, and probably will
> be for a long time, so *by definition* it's not in the running.  Any
> other players?  I haven't heard of any...
>
> >Andy Tanenbaum (a...@cs.vu.nl)
>
> Minix is an excellent piece of work.  A good starting point for anyone who
> wants to learn about operating systems.  But it needs rewriting to make it
> truly elegant and functional.  As it is, there are too many kludges and
> hacks (e.g., the message passing).
>
>                                 Kevin Brown
Re: LINUX is obsolete Gary 12.01.05 18:04

On 10 Jan 2005, Omniscientist frothed:
> Well, sorry Ken...but it looks like microkernels didn't last as long as
> you thought.

Is there a particular reason that you're trolling this group by answering
13 year old posts by Ken Thompson?

By the way, have you taken a look at Mac OS X lately?
http://developer.apple.com/darwin/history.html
http://developer.apple.com/documentation/Porting/Conceptual/PortingUnix/additionalfeatures/chapter_10_section_8.html

> I'm on Linux right now...and minix isn't anywhere in sight..

No, you won't see Oracle 10 ported to Minix any time soon, and for good
reason; see quote below.

-Gary

http://www.prenhall.com/bookbind/pubbooks/tanenbaum/chapter0/custom4/deluxe-content.html

How would you compare MINIX and Linux?

From the very beginning, MINIX has been designed as a system for teaching
students and others about operating systems. An overriding concern has
always been to keep it simple enough for people to understand. In
particular, it is still simple enough that the complete core of the source
code is listed in the book and can be understood by a student in a one
semester course. Linux was designed as a production system and is
correspondingly much less suited to learning about operating systems.

Re: LINUX is obsolete chandan 14.01.05 03:11
hi,
I'm new to this forum, but I can say that there is not a single
operating system that has been designed for students except Minix.
Of course Linux is great on its own terms, but don't you think that
Minix is well suited to the distributed operating system concept? I liked
the microkernel concept. I'd certainly say that Minix is a student's OS.
Linux, too, was not great initially, but the modifications provided by
different groups and organizations changed it into a major force. I
know that Minix is older than Linux but the former is not popular; the reason
is cited in one of A. Tanenbaum's articles (he wants to leave this OS for
students).
Re: LINUX is obsolete Adrien Plisson 16.01.05 01:43
Gary wrote:
> Is there a particular reason that you're trolling this group by answering
> 13 year old posts by Ken Thompson?

i do have a question: why do i see a lot of greeting posts for Torvalds
on this newsgroup lately? are these "10 year old posts revived by some
kind of bug"? do people confuse minix and linux?

thanks

--
rien

Re: LINUX is obsolete Adrien Plisson 16.01.05 01:49

well, i didn't read the post by Michael Black when i posted this. Now i
understand, sorry for disturbing...

--
rien

Re: LINUX is obsolete Tux Wonder-Dog 20.01.05 03:50
chandan wrote:

Actually, there's Switzer's Tunix, a more rigorously microkernel-based
teaching operating system than Minix - because practically everything's a
server in Tunix, versus not that many in Minix.  Switzer wrote his book and
OS for students.

And there is Wirth's Oberon, just to stir the pot a bit.  Wirth wrote his
experimental OS plus language for students.

And Per Brinch Hansen's Solo and associated Concurrent Pascal OSes, largely
written for students as well.

It seems to be quite wide-spread.

One of the OSes that _wasn't_ written for students, surprise, surprise, was
Unix.  It was a production system that got studied - read the Lions Book/s.

Having said that, if it hadn't been for the work of Andy Tanenbaum in
writing his book and Minix, the world would've missed out on Linux, since
it inspired Linus Torvalds to go one better; and I for one wouldn't have
learned as much about Operating Systems as I've managed to.

So, thanks to Andy Tanenbaum; and also thanks to Linus Torvalds.  And, did I
forget to mention, thank you to the entire BSD team through the years, etc?

Wesley Parish
--
"Good, late in to more rewarding well."  "Well, you tonight.  And I was
lookintelligent woman of Ming home.  I trust you with a tender silence."  I
get a word into my hands, a different and unbelike, probably - 'she
fortunate fat woman', wrong word.  I think to me, I justupid.
Let not emacs meta-X dissociate-press write your romantic dialogs...!!!

Re: LINUX is obsolete chandan 24.01.05 00:04
thanks tux. i hadn't any idea about these OSes. thanks again
Re: LINUX is obsolete burt 25.01.05 10:36

Re: LINUX is obsolete Moses 29.05.11 20:38

What a great thread!

Re: LINUX is obsolete Moses 29.05.11 20:42

Re: LINUX is obsolete derrick....@gmail.com 13.03.12 05:59
This thread is impressive. The idea that I can read old USENET posts is absolutely amazing. The idea I can reply to them is even cooler.
First of all, Yes, Tanenbaum was wrong, but the discussion here will last far longer than LINUX will.

Second of all, for the first internet "flame war", this was likely the most mild flame war I've ever seen. If people were "Flaming", they were certainly more cordial!

As an aside, I wonder what happens when I post here. Does it only update to Google Groups, or am I actually posting on the USENET? (I'm pretty sure it's the latter, because Groups should allow me to post to the USENET proper, but I'm not sure).

Re: LINUX is obsolete max...@googlemail.com 15.03.12 04:19

Yes this is a legendary thread!

Re: LINUX is obsolete Tonton Th 20.03.12 03:41
On 03/13/2012 01:59 PM, derrick....@gmail.com wrote:

>
> As an aside, I wonder what happens when I post here.
 > Does it only update to Google Groups, or am I actually
 > posting on the USENET? (I'm pretty sure it's the latter,
 > because Groups should allow me to post to the USENET proper,
 > but I'm not sure).

    Welcome in the good old real Usenet.

--

We live in a strange world/
                                 http://foo.bar.quux.over-blog.com/

Re: LINUX is obsolete Srinivas Nayak 16.06.12 04:28

One thing I heard is that the Linux 2.6 kernel doesn't support a lot of old hardware.
What is that exactly?
Is it because some device drivers were removed because of the bulkiness of 2.6?

Re: LINUX is obsolete tth...@panvistamedia.com 29.06.12 14:57
On Tuesday, March 20, 2012 6:41:32 AM UTC-4, Tonton Th wrote:
> On 03/13/2012 01:59 PM, derrick....@gxxxx.com wrote:
>
> >
> > As an aside, I wonder what happens when I post here.
>  > Does it only update to Google Groups, or am I actually
>  > posting on the USENET? (I'm pretty sure it's the latter,
>  > because Groups should allow me to post to the USENET proper,
>  > but I'm not sure).
>
>     Welcome in the good old real Usenet.
I thought Google Groups quickly fixed this mis-feature?

But with respect to the topic of the thread:

  "MINIX was designed to be reasonably portable, and has been ported from the
   Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
   LINUX is tied fairly closely to the 80x86.  Not the way to go."
How things change. As far as Minix v3 is concerned, "all the world's an x86"; downloads don't even bother naming the single supported platform: http://www.minix3.org/download/index.html

Maybe it will eventually be ported to ARM. But portability in general is, sadly, no longer a goal for the project.

Re: LINUX is obsolete mr.fi...@gmail.com 20.09.12 18:07
On Wednesday, January 29, 1992 at 11:23:33 UTC-2, ast wrote:
> I was in the U.S. for a couple of weeks, so I haven't commented much on
> LINUX (not that I would have said much had I been around), but for what
> it is worth, I have a couple of comments now.
>
>    MINIX is a microkernel-based system.  The file system and memory management
>    are separate processes, running outside the kernel.  The I/O drivers are
>    also separate processes (in the kernel, but only because the brain-dead
>    nature of the Intel CPUs makes that difficult to do otherwise).  LINUX is
>    a monolithic style system.  This is a giant step back into the 1970s.
>    That is like taking an existing, working C program and rewriting it in
>    BASIC.  To me, writing a monolithic system in 1991 is a truly poor idea.
>
>
> 2. PORTABILITY
>    Once upon a time there was the 4004 CPU.  When it grew up it became an
>    8008.  Then it underwent plastic surgery and became the 8080.  It begat
>    the 8086, which begat the 8088, which begat the 80286, which begat the
>    80386, which begat the 80486, and so on unto the N-th generation.  In
>    the meantime, RISC chips happened, and some of them are running at over
>    100 MIPS.  Speeds of 200 MIPS and more are likely in the coming years.
>    These things are not going to suddenly vanish.  What is going to happen
>    is that they will gradually take over from the 80x86 line.  They will
>    run old MS-DOS programs by interpreting the 80386 in software.  (I even
>    wrote my own IBM PC simulator in C, which you can get by FTP from
>    ftp.cs.vu.nl =  192.31.231.42 in dir minix/simulator.)  I think it is a
>    gross error to design an OS for any specific architecture, since that is
>    not going to be around all that long.
>
>    MINIX was designed to be reasonably portable, and has been ported from the
>    Intel line to the 680x0 (Atari, Amiga, Macintosh), SPARC, and NS32016.
>    LINUX is tied fairly closely to the 80x86.  Not the way to go.
>
> Don`t get me wrong, I am not unhappy with LINUX.  It will get all the people
> who want to turn MINIX in BSD UNIX off my back.  But in all honesty, I would
> suggest that people who want a **MODERN** "free" OS look around for a
> microkernel-based, portable OS, like maybe GNU or something like that.
>
>
> Andy Tanenbaum (a...@cs.vu.nl)
>
>
> P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
> space), but it is far from complete.  If there are any people who would
> like to work on that, please let me know.  To run Amoeba you need a few 386s,
> one of which needs 16M, and all of which need the WD Ethernet card.

Yeah, this is the famous and epic discussion between Linus Torvalds and Andrew S. Tanenbaum about the Linux kernel architecture and how Tanenbaum thinks that monolithic kernels are inferior to microkernels.

Re: LINUX is obsolete mkam...@tvz.hr 28.09.12 06:45

On Wednesday, January 29, 1992 at 14:23:33 UTC+1, user ast wrote:
Wow, wish that either Linus or Tanenbaum would be my teacher... If either of you two ever starts to work for the Technical Polytechnic of Zagreb I will be very happy.

Re: LINUX is obsolete nimeto...@gmail.com 09.10.12 11:57

This shoulda been locked.

Re: LINUX is obsolete Martijn van Buul 10.10.12 05:07
Re: LINUX is obsolete delro...@gmail.com 22.10.12 17:35
Re: LINUX is obsolete Martijn van Buul 22.10.12 23:51
Re: LINUX is obsolete Edward A. Falk 23.10.12 17:37
Re: LINUX is obsolete Asad Dhamani 18.11.12 12:33

Welcome to 2012! 20 years, I have my own Linux distro! :D The future is now a thing of the past! Feels so great seeing so old threads!

Re: LINUX is obsolete amyas...@gmail.com 25.11.12 03:30

So I can post a reply right now for this historical flame war?? And Google will actually index my name along with ast and Linus? Wow :D

Re: LINUX is obsolete luke...@gmail.com 26.11.12 11:34

On Wednesday, January 29, 1992 at 09:20:50 UTC-5, David Megginson wrote:
> I would like to at least look at LINUX, but I cannot, since I run
> a 68000-based machine. In any case, it is nice having the kernel
> independent, since patches like the multi-threaded FS patch don't
> have to exist in a different version for each CPU.
>
> I second everything AST said, except that I would like to see
> the kernel _more_ independent from everything else. Why does the
> Intel architecture _not_ allow drivers to be independent programs?
>
> I also don't like the fact that the kernel, mm and fs share the
> same configuration files. Since they _are_ independent, they should
> have more of a sense of independence.
>
>
> David
>
> #################################################################
> David Megginson                  meg...@epas.utoronto.ca
> Centre for Medieval Studies      da...@doe.utoronto.ca
> University of Toronto            39 Queen's Park Cr. E.
> #################################################################


ast, time proved you were wrong, yet, you are the best dinosaur I ever know,
greetings from cuba.

Skywalker.

Re: LINUX is obsolete Martijn van Buul 27.11.12 02:33 Re: LINUX is obsolete caront...@hotmail.com 30.12.12 14:32

On Wednesday, January 29, 1992 at 06:23:33 UTC-7, ast wrote:

Re: LINUX is obsolete sk...@ns.sympatico.ca 01.02.13 03:14
On Wednesday, October 10, 2012 9:07:26 AM UTC-3, Martijn van Buul wrote:

> > This shoulda been locked.
>
>
>
> This isn't a forum.
>

it's usenet, afaik a non-moderated newsgroup can't really have locked threads, violates the rules.
Re: LINUX is obsolete Martijn van Buul 01.02.13 03:53
* sk...@ns.sympatico.ca:
> it's usenet, afaik a non-moderated newsgroup can't really have locked
> threads, violates the rules.
It's not a question of "rules", it's a question of not being capable of.

The rules, if any, state you shouldn't post articles with a line length of
more than 80 characters ;)

Re: LINUX is obsolete derrick....@gmail.com 17.04.13 09:31

I feel like I'm a part of history.
I wasn't even born when this thread opened...

Re: LINUX is obsolete syn4...@gmail.com 25.04.13 10:18

Me too. :D

Re: LINUX is obsolete usama...@gmail.com 28.04.13 01:57

I am very surprised to read this thread.
Was there Google and Google groups in 1992.....?

[OT] Archives [Was: LINUX is obsolete] Antoine Leca 29.04.13 03:58
usama...@gmail.com wrote:
> I am very surprised to read this thread.
> Was there Google and Google groups in 1992.....?
Not exactly; it was known as Usenet then, and Google Groups just
bought the Usenet archives (through DejaNews) in 2001 IIRC.
        http://www.google.com/googlegroups/archive_announce_20.html

And since Usenet does allow threads to last and last, and since Prof.
Tanenbaum did not put an "Expire" header on his post, Google Groups
offer you the opportunity to post to that thread. Even posts completely
off-topic to that thread (or to this group comp.os.minix as well.)

Antoine

Re: [OT] Archives [Was: LINUX is obsolete] Michael Black 30.04.13 12:35
No.

Google moved to a new interface, and allowed replies to messages older
than 30 days.  30 days was the way from the beginning, and really is
proper for old messages.  They had it that way from when they took over
the dejanews archive.  A previous time they changed the interface, and put
the bug in, but after complaints it was fixed.  I have no idea if they
removed the vandalism to old threads, but that time back some years did
cause stupid responses just like now, the idiots attracted to a thread
that google itself has pointed out as "historic", at one point they put up
a timeline of usenet and pointed to specific posts, including this one.

I'm tired of fighting with google, so I've never bothered complaining this
recent time. Their current interface may be fine for a web-based
newsgroup, but it's not right at all for Usenet.  It doesn't even show
dates, or even where the message is posted to (so nobody knows about
cross-posting).

There is no reason to reply to old messages.  The conversation has moved
on, if someone saved the message at the time and gets around to it later,
that's different from someone replying through google years later.  The
posters just drop the thread into the newsgroup, they are oblivious to
where it's going.  They often don't even quote.  Suddenly a mystery
message appears in the newsgroup without context, people reply without
even wondering where the rest of the thread is.

The replies usually offer nothing new, they ignore that the thread back 20
years ago had plenty of answers, likely sufficient.  The replies ignore
the fact that whoever posted back when may no longer be reading the
newsgroup.  Indeed, when some idiot replies to an old message that was
offering something for sale, the original poster may not have been there
except to post his ad.

All of these replies are from idiots who think it's "cool" to reply to an
old thread, like they were actually around 20 years ago.  They haven't
even added anything useful, just a bunch of "me toos".

The only reason I've not bothered replying to condemn these idiots is
because this is a historical thread, and I didn't want to add my
vandalism.  But since you've just given approval to it, I had to speak
out.  Don't encourage the idiots.

As an aside, dejanews only started in 1995 or 96.  Their archive is what
google bought.  But then google tracked down older archives of a more
limited nature, putting them together, which is why there is now an
incomplete archive going back to  the start of Usenet, 1979.  It's those
other archives that kept this thread, and the vandalism is because of
google, and because of the idiots who reply.

   Michael

Re: [OT] Archives [Was: LINUX is obsolete] colone...@yahoo.com 03.05.13 08:56

Re: LINUX is obsolete anoncom...@gmail.com 06.05.13 16:55

I am replying... to history.

Re: LINUX is obsolete Mahalingam P.R 12.07.13 02:41
On Tuesday, May 7, 2013 5:25:37 AM UTC+5:30, anoncom...@gmail.com wrote:
> I am replying... to history.
Me too....
Re: LINUX is obsolete Chris Card 29.01.14 08:29
Well, computers will be obsolete.

And about history, I've felt a turn in 1986: before, there were plenty of processors and OSs, then a kind of normalisation came. To me it would be awesome if MINIX were ported to tablets, cos I'm not satisfied with Android and iOS. Google is getting crazy and the iPhone is totally locked.

Re: LINUX is obsolete jub...@gmail.com 05.04.14 16:39

Linus, I have to tell you about the future!

Re: LINUX is obsolete tucker 20.07.14 21:46

Re: LINUX is obsolete theadve...@gmail.com 06.09.14 21:24
Fellow programmer stopping by.
In a new era, now.
Almost wishing I was in this one.
So I could focus on operating systems, compilers and tool chains.
Instead, I'm designing websites.

Thank you all for the history you have created before me.
I'll window shop into the past.
And move toward the future...

Re: LINUX is obsolete rakesh.m...@gmail.com 14.12.14 01:08
On Wednesday, January 29, 1992 6:53:33 PM UTC+5:30, ast wrote:
It has been almost 13 years since you made this post, LINUX kernal has withstood the test of time and has given the world the best operating system as a part of the GNU system. GNU/LINUX is used across almost all the supercomputers and servers across the world. Many PCs also run on GNU/LInux and the numbers are rising. The most popular mobile operating system Android has a kernal which is a modified version of the monolithic Linux kernal. Its amazing how a 21 year old university student beat a world renouned OS resercher .
Re: LINUX is obsolete jib...@gmail.com 25.01.15 14:05
> It has been almost 13 years since you made this post [...] Its amazing how a 21 year old university student beat a world renouned OS resercher .

What's amazing is that you don't know how to count, yet criticize the work of others.

Re: LINUX is obsolete cmgd...@gmail.com 11.03.15 17:40

Hello World

Re: LINUX is obsolete s...@amritahyd.org 13.03.15 00:07

very nice post.................

Re: LINUX is obsolete s...@amritahyd.org 13.03.15 00:38

nice  post

Re: LINUX is obsolete dmwicke...@gmail.com 22.03.15 18:14
> P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
> space), but it is far from complete.  If there are any people who would
> like to work on that, please let me know.  To run Amoeba you need a few 386s,
> one of which needs 16M, and all of which need the WD Ethernet card.
Greetings Andy. I am replying to your 23-year-old post.
Re: LINUX is obsolete hua...@gmail.com 08.06.15 01:06

You are 666

Re: LINUX is obsolete bapak...@gmail.com 08.06.15 09:17

this is from 23 years ago? wooaahh...

Re: LINUX is obsolete Martin 25.06.15 21:16

Posting in historic thread

Re: LINUX is obsolete romap...@gmail.com 20.08.15 13:17
On Friday, June 26, 2015 at 01:16:28 UTC-3, r35erv...@gmail.com wrote:
> Posting in historic thread

Linux still is obsolete?

Re: LINUX is obsolete stenio...@ccc.ufcg.edu.br 27.08.15 12:47

I wish I had been the age I am now when this thread started. Such a good motivation for studying.
#PostForHistory

Re: LINUX is obsolete foretol...@gmail.com 02.11.15 21:09
Just wanted to make it clear that: Linux, despite everything you may now know some 20+ years later, is still obsolete.

Oh wait, I meant to say that you're completely WRONG. 20 years into the future. You. Are. Wrong.

Let me use modern conventions to explain this to you:

#WRECKED.

That's what you are. #WRECKED.

That's right. I'm from the future. I am judging you from 2015. You are #WRECKED and there's nothing you can do about it but watch your opinions, baseless and infantile, crumble in to the void of their own worth. God, I bet you really feel like an idiot now. Good night sweet prince.

Bathe in the glory of what has occurred. You are the lord of emptiness.

Re: LINUX is obsolete John Doe 03.11.15 21:23
Wow, that's pretty harsh.

I only hope the poor soul will be able to find burn cream when he sees this.

Re: LINUX is obsolete Dominic Richens 18.12.15 08:33

Pretty funny reading this on an ARM based cell phone which is
essentially running Linux.

LINUX is obsolete ejorda...@yahoo.com 28.07.16 13:00

Hello, are you still on the web? I doubt it, I'm sure you've left it by now and have much better things to do with your life now but I have to say, thank you for making history. It's so amazing to see the aftermath of a 24-year old flame war, 8 years before my own birth, with my own eyes. Even if it's much different than the traditional ones we have today. Thanks again for creating art. Goodbye ;)

Re: LINUX is obsolete ejorda...@yahoo.com 03.08.16 09:58

What kind of poo

Re: LINUX is obsolete phillip 08.11.16 01:07
On Wednesday, 29 January 1992 23:12:50 UTC+11, ast  wrote:
> Andy Tanenbaum (a...@cs.vu.nl)
>
>
> P.S. Just as a random aside, Amoeba has a UNIX emulator (running in user
> space), but it is far from complete.  If there are any people who would
> like to work on that, please let me know.  To run Amoeba you need a few 386s,
> one of which needs 16M, and all of which need the WD Ethernet card.
It's official, Linux won, woot
Re: LINUX is obsolete Chris Card 13.11.16 07:07

trump also won, so what ?

Re: LINUX is obsolete Krunalkumar Shah 13.11.16 10:07

Nice and impressive question.

Re: LINUX is obsolete colone...@yahoo.com 14.11.16 17:58
Is troll.
Don't feed.
Re: LINUX is obsolete day...@gmail.com 04.01.17 00:03
On Monday, January 10, 2005 at 15:02:10 UTC+1, Omniscientist wrote:
> Well, sorry Ken...but it looks like microkernels didn't last as long as
> you thought. I'm on Linux right now...and minix isn't anywhere in
> sight..

Well sorry whatever your name is, coz minix seems to be still on : http://wiki.minix3.org/doku.php?id=www:getting-started:start

wish you a pleasant death

Re: LINUX is obsolete professorw...@gmail.com 27.01.17 15:49

Re: LINUX is obsolete professorw...@gmail.com 27.01.17 15:51

I can't believe I can send an email here to the most important flame war in the tech world =) congratulations AST and LBT for making this happen in such spectacular fashion, almost 25 years later.

Re: LINUX is obsolete naturist...@gmail.com 16.04.17 21:35

On Wednesday, January 29, 1992 at 8:12:50 (UTC-4), ast wrote:

Re: LINUX is obsolete iams...@gmail.com 18.04.17 05:24
Hi All,

it is my honor to read this mail thread. This is history. This is awesome.

Thanks,
Sumesh KS.

Re: LINUX is obsolete rhue...@gmail.com 25.04.17 09:56

Regards from 2017!
How absolutely silly and ignorant this looks 25 years in the future.
Although, it goes to show that even 25 years ago the opinions of highly-educated CNN-loving "professors" were worthless.

Re: LINUX is obsolete Michael Black 25.04.17 10:48
On Tue, 25 Apr 2017, rhue...@gmail.com wrote:

> Regards from 2017!

> How absolutely silly and ignorant this looks 26 years in the future.
> Although, it goes to show that even 26 years later the opinions of highly-educated CNN-loving "professors" can be less than worthless.
>
But save us from the idiots who think it's okay to vandalize a historic
thread, and spews the same message three times because he thinks it's not
working.

This vandalizing has been going on for almost 15 years, nothing like being
so "smart" as to do what others have been doing for those 15 years.  Maybe
the first time it was amusing, but all this time later, it's not.

   Michael

Re: LINUX is obsolete rhue...@gmail.com 25.04.17 14:06

Never said I was smart, bro. So many years of hindsight disqualify that. I deleted because I needed to edit my post but saw no 'edit' function. Maybe there is one but I'm too foolish to see it, oh well.

Re: LINUX is obsolete anthon...@gmail.com 21.05.17 14:25

On Wednesday, January 29, 1992 at 7:12:50 (UTC-5), ast wrote:
Hi Linux

Re: LINUX is obsolete bunne...@gmail.com 27.07.17 15:27

Aged like fine wine

Re: LINUX is obsolete bara...@gmail.com 08.09.17 06:22

linux master race 2017

Re: LINUX is obsolete dip...@gmail.com 16.11.17 05:55

Re: LINUX is obsolete rese...@gmail.com 08.02.18 08:38

YOU GUYS ARE BADASS

Re: LINUX is obsolete defin...@gmail.com 16.03.18 19:54
Linux vs. Minix.

Some very fine people on both sides.

Re: LINUX is obsolete ibe...@gmail.com 10.04.18 07:53

Personally, I'm still banking on CP/M


Go: The Good, the Bad and the Ugly

1 Share

This is an additional post in the “Go is not good” series. Go does have some nice features, hence the “The Good” part in this post, but overall I find it cumbersome and painful to use when we go beyond APIs or network servers (which is what it was designed for) and use it for business domain logic. But even for network programming, it has a lot of gotchas in both its design and implementation that make it dangerous under its apparent simplicity.

What motivated this post is that I recently came back to using Go for a side project. I used Go extensively in my previous job to write a network proxy (both http and raw tcp) for a SaaS service. The network part was rather pleasant (I was also discovering the language), but the accounting and billing part that came with it was painful. As my side project was a simple API, I thought Go would be the right tool to get the job done quickly but, as we know, many projects grow beyond their initial scope, so I had to write some data processing to compute statistics, and the pains of Go came back. So here's my take on Go's woes.

Some background: I love statically typed languages. My first significant programs were written in Pascal. I then used Ada and C/C++ when I started working in the early 90's. I later moved to Java and finally Scala (with some Go in between) and recently started learning Rust. I've also written a substantial amount of JavaScript, because until recently it was the only language available in web browsers. I feel insecure with dynamically typed languages and try to limit their use to simple scripting. I'm comfortable with imperative, functional and object oriented approaches.

This is a long post, so here's the menu to whet your appetite: the good, the bad and the ugly.

The Good

Go is easy to learn

That's a fact: if you know any kind of programming language, you can learn most of Go's syntax in a couple of hours with the "Tour of Go", and write your first real program in a couple of days. Read and digest Effective Go, wander around in the standard library, play with a web toolkit like Gorilla or Go kit and you'll be a pretty decent Go developer.

This is because Go's overarching goal is simplicity. When I started learning Go it reminded me of when I first discovered Java: a simple language and a rich but not bloated standard library. Learning Go was a refreshing experience coming from today's Java-heavy environment. Because of Go's simplicity, Go programs are very readable, even if error handling adds quite a bit of noise (more on this below).

This may be false simplicity though. Quoting Rob Pike, simplicity is complicated, and we will see below that behind it there are a lot of gotchas waiting to bite us, and that simplicity and minimalism prevent writing DRY code.

Easy concurrent programming with goroutines and channels

Goroutines are probably the best feature of Go. They're lightweight computation threads, distinct from operating system threads.

When a Go program executes what looks like a blocking I/O operation, the Go runtime actually suspends the goroutine and resumes it when an event indicates that some result is available. In the meantime other goroutines have been scheduled for execution. We therefore have the scalability benefits of asynchronous programming with a synchronous programming model.

Goroutines are also lightweight: their stack grows and shrinks on demand, which means having 100s or even 1000s of goroutines is not a problem.

I once had a goroutine leak in an application: these goroutines were waiting for a channel to be closed before ending, and that channel was never closed (a common deadlock issue). The process was eating 90% of the CPU for no reason, and inspecting expvars showed 600k idle goroutines! I guess the CPU was used by the goroutine scheduler.

Sure, an actor system like Akka can handle millions of actors without breaking a sweat, in part because actors don't have a stack, but they're far from being as easy to use as goroutines to write heavily concurrent request/response applications (i.e. http APIs).

Channels are how goroutines should communicate: they provide a convenient programming model to send and receive data between goroutines without having to rely on fragile low-level synchronization primitives. Channels come with their own set of usage patterns.

Channels have to be thought out carefully though, as incorrectly sized channels (they're unbuffered by default) can lead to deadlocks. We will also see below that using channels doesn't prevent race conditions because Go lacks immutability.
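
To make this concrete, here is a minimal sketch of the goroutine/channel model; the fetch helper and the URLs are invented for the example, and the channel is deliberately buffered and explicitly closed to avoid the leak and deadlock issues mentioned above:

package main

import (
    "fmt"
    "sync"
)

// fetch simulates a blocking call; in a real program this could be an
// http.Get. The function and the URLs below are made up for this sketch.
func fetch(url string, results chan<- string) {
    results <- "fetched " + url
}

func main() {
    urls := []string{"https://example.com/a", "https://example.com/b"}

    // Buffered to len(urls): senders never block, so no goroutine is
    // left waiting forever even if the reader gives up early.
    results := make(chan string, len(urls))

    var wg sync.WaitGroup
    for _, url := range urls {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()
            fetch(u, results)
        }(url)
    }

    // Close the channel once all goroutines are done so the range loop ends.
    go func() {
        wg.Wait()
        close(results)
    }()

    for r := range results {
        fmt.Println(r)
    }
}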

Great standard library

The Go standard library is really great, particularly for everything related to network protocols or API development: http client and server, crypto, archive formats, compression, sending email, etc. There's even an html parser and a rather powerful templating engine to produce text & html with automatic escaping to avoid XSS (used for example by Hugo).

The various APIs are generally simple and easy to understand. They can sometimes look simplistic though: this is in part because the goroutine programming model means we just have to care about "seemingly synchronous" operations. This is also because a few versatile functions can replace a lot of specialized ones, as I found out recently for time calculations.
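
As an illustration, a small http API server needs nothing outside the standard library; a minimal sketch (the /hello route is an invented example, and net/http serves each request in its own goroutine):

package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    // Register a handler on the default mux; net/http runs each
    // incoming request in its own goroutine.
    http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprintf(w, "Hello, %s!\n", r.URL.Query().Get("name"))
    })
    log.Fatal(http.ListenAndServe(":8080", nil))
}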

Go is performant

Go compiles to a native executable. Many users of Go come from Python, Ruby or Node.js. For them, this is a mind-blowing experience as they see a huge increase in the number of concurrent requests a server can handle. This is actually pretty normal when you come from interpreted languages with either no concurrency (Node.js) or a global interpreter lock. Combined with the language's simplicity, this explains part of the excitement for Go.

Compared to Java however, things are not so clear in raw performance benchmarks. Where Go beats Java though, is on memory usage and garbage collection.

Go's garbage collector is designed to prioritize latency and avoid stop-the-world pauses, which is particularly important in servers. This may come with a higher CPU cost, but in a horizontally scalable architecture this is easily solved by adding more machines. Remember that Go was designed at Google, which is anything but short on resources!

Compared to Java, the Go GC also has less work to do: a slice of structs is a contiguous array of structures, and not an array of pointers like in Java. Similarly Go maps use small arrays as buckets for the same purpose. This means less work for the GC, and also better CPU cache locality.
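
A small sketch of that layout difference (the Point type is invented for the example): a []Point stores the struct values themselves back to back, where a Java Point[] stores pointers to separately allocated objects.

package main

import "fmt"

// Point values stored in a []Point sit contiguously in one block of
// memory; there is one allocation for the whole slice, not one per element.
type Point struct{ X, Y int64 }

func main() {
    points := make([]Point, 4)
    for i := range points {
        points[i] = Point{X: int64(i), Y: int64(i * i)}
    }
    fmt.Println(points) // [{0 0} {1 1} {2 4} {3 9}]
}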

Go also beats Java for command-line utilities: being a native executable, a Go program has no startup cost, unlike Java, which first has to load and compile bytecode.

Language defined source code format

Some of the most heated debates in my career happened around the definition of a code format for the team. Go solves this by defining a canonical format for Go code. The gofmt tool reformats your code and has no options.

Like it or not, gofmt defines how Go code should be formatted, and that problem is therefore solved once and for all!

Standardized test framework

Go comes with a great test framework in its standard library. It supports parallel testing and benchmarks, and contains a lot of utilities to easily test network clients and servers.
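
A minimal sketch of both features; this would live in a _test.go file (the trivial arithmetic stands in for real logic):

package main

import "testing"

func TestAdd(t *testing.T) {
    t.Parallel() // runs in parallel with other parallel tests
    if got := 2 + 2; got != 4 {
        t.Errorf("got %d, want 4", got)
    }
}

func BenchmarkAdd(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = 2 + 2
    }
}

go test runs the test; go test -bench=. also runs the benchmark.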

Go programs are great for operations

Compared to Python, Ruby or Node.js, having to install a single executable file is a dream for operations engineers. This is less and less an issue with the growing use of Docker, but standalone executables also means tiny Docker images.

Go also has some built-in observability features with the expvar package to publish internal statuses and metrics, and makes it easy to add new ones. Be careful though, as they are automatically exposed, unprotected, on the default HTTP request handler. Java has JMX for a similar purpose, but it's much more complex.
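
A minimal sketch of publishing a counter (the metric name and port are arbitrary):

package main

import (
    "expvar"
    "net/http"
)

var requests = expvar.NewInt("requests") // published at /debug/vars

func handler(w http.ResponseWriter, r *http.Request) {
    requests.Add(1)
    w.Write([]byte("hello"))
}

func main() {
    http.HandleFunc("/", handler)
    // Importing expvar registers /debug/vars on the default mux as a
    // side effect -- unprotected, as noted above.
    http.ListenAndServe(":8080", nil)
}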

Defer statement, to avoid forgetting to clean up

The defer statement serves a purpose similar to finally in Java: execute some clean up code at the end of the current function, no matter how this function is exited. The interesting thing with defer is that it's not linked to a block of code, and can appear at any time. This allows the clean up code to be written as close as possible to the code that creates what needs to be cleaned up:

func processFile(fileName string) error {
    file, err := os.Open(fileName)
    if err != nil {
        return err
    }
    defer file.Close() // will run whenever processFile returns, however it exits

    // use file, we don't have to think about closing it anymore
    return nil
}

Sure, Java's try-with-resources is less verbose and Rust automatically reclaims resources when their owner is dropped, but since Go requires you to be explicit about resource clean up, having it close to the resource allocation is nice.

New types

I love types, and something that irritates/scares me is when, for example, we pass around persisted object identifiers as string or long everywhere. We usually encode the id's type in the parameter name, but this is a cause of subtle bugs when a function takes several identifiers as parameters and a call mismatches the parameter order.

Go has first-class support for new types, i.e. types that take an existing type and give it a separate identity, distinct from the original one. Contrarily to wrapping, new types have no runtime overhead. This allows the compiler to catch this kind of mistake:

type UserId string // <-- new type
type ProductId string

func AddProduct(userId UserId, productId ProductId) {}

func main() {
    userId := UserId("some-user-id")
    productId := ProductId("some-product-id")

    // Right order: all fine
    AddProduct(userId, productId)

    // Wrong order: would compile with raw strings
    AddProduct(productId, userId)
    // Compilation errors:
    // cannot use productId (type ProductId) as type UserId in argument to AddProduct
    // cannot use userId (type UserId) as type ProductId in argument to AddProduct
}

Unfortunately the lack of generics makes the use of new types cumbersome, as writing reusable code for them requires converting values to/from the original type, as the sketch below shows.
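
A hypothetical helper illustrating the conversion boilerplate (JoinUserIds is not a real API, just a sketch):

package ids

import "strings"

type UserId string

// JoinUserIds must convert each new-typed value back to its underlying
// string one by one: a []UserId cannot be converted to []string directly,
// and without generics this loop must be rewritten for every new type.
func JoinUserIds(ids []UserId, sep string) string {
    raw := make([]string, len(ids))
    for i, id := range ids {
        raw[i] = string(id)
    }
    return strings.Join(raw, sep)
}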

The Bad

Go ignored advances in modern language design

In Less is exponentially more, Rob Pike explains that Go was meant to replace C and C++ at Google, and that its precursor was Newsqueak, a language he wrote in the 80's. Go also has a lot of references to Plan9, a distributed operating system the authors of Go developed in the 80's at Bell Labs.

There's even a Go assembly language directly inspired by Plan9's. Why not use LLVM, which would have provided a wide range of target architectures out of the box? I may be missing something here, but why is that needed at all? If you need to write assembly to get the most out of the CPU, shouldn't you use the target CPU's assembly language directly?

Go's creators deserve a lot of respect, but it looks like Go's design happened in a parallel universe (or their Plan9 lab?) where most of what happened in compilers and programming language design in the 90's and 2000's never took place. Or as if Go was designed by system programmers who also happened to be able to write a compiler.

Functional programming? No mention of it. Generics? You don't need them; look at the mess they produced in C++! Even though slice, map and channel are generic types, as we'll see below.

Go's goal was to replace C and C++, and it's apparent that its creators didn't look much elsewhere. They missed their target though, as C and C++ developers at Google didn't adopt it. My guess is that the primary reason is the garbage collector. Low level C developers fiercely reject managed memory as they have no control on what happens and when. They like this control, even if it comes with additional complexity and opens the door to memory leaks and buffer overflows. Interestingly, Rust has taken a completely different approach with automatic memory management without a GC.

Go instead attracted users of scripting languages like Python and Ruby in the area of operations tools. They found in Go a way to get great performance and a reduced memory/cpu/disk footprint. And more static typing too, which was new to them. The killer app for Go was Docker, which triggered its wide adoption in the devops world. The rise of Kubernetes strengthens this trend.

Interfaces are structural types

Go interfaces are like Java interfaces or Scala & Rust traits: they define behaviour that is later implemented by a type (I won't call it "class" here).

Unlike Java interfaces and Scala & Rust traits though, a type doesn't need to explicitly declare that it implements an interface: it just has to implement all the functions defined in the interface. So Go interfaces are actually structural types.

We may think that this is meant to allow interface implementations in packages other than the one defining the type, like the class extensions that exist in Scala or Kotlin, or Rust traits, but this isn't the case: all methods related to a type must be defined in the type's package.

Go isn't the only language to use structural typing, but I find it has several drawbacks:

  • finding what types implement a given interface is hard as it relies on function definition matching. I often discover interesting implementations in Java or Scala by searching for classes that implement an interface.

  • when adding a method to an interface, you will find out which types need to be updated only when they are used as values of this interface type. This can go unnoticed for quite some time. Go recommends having tiny interfaces with very few methods, which is one way to mitigate this.

  • a type may unknowingly implement an interface because it has the corresponding methods. But being accidental, the semantics of the implementation may differ from what the interface contract expects, as the sketch after this list illustrates.
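
A contrived sketch of such an accidental match (Stopper and Watch are made-up names):

package main

// Stopper could mean "abort processing" in some hypothetical API...
type Stopper interface {
    Stop()
}

// ...while Watch.Stop means "stop the stopwatch". The compiler happily
// accepts a Watch wherever a Stopper is expected: the methods match
// structurally, even if the intended semantics don't.
type Watch struct{}

func (Watch) Stop() {}

func abort(s Stopper) { s.Stop() }

func main() {
    abort(Watch{}) // compiles: structural match, accidental semantics
}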

Update: for some ugliness with interfaces, see nil interface values below.

No enumerations

Go doesn't have enums, and in my opinion it's a missed opportunity.

There is iota to quickly generate auto-incrementing values, but it looks more like a hack than a feature. And a dangerous one, actually: inserting a line in a series of iota-generated constants changes the value of all the constants that follow. Since the generated value is the one used throughout the code (and possibly persisted), this can lead to interesting (not!) surprises, as sketched below.
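
A minimal sketch of the hazard (Status and its constants are made up):

package status

type Status int

const (
    StatusActive  Status = iota // 0
    StatusDeleted               // 1
)

// If someone later inserts a constant in the middle:
//
//    StatusActive  Status = iota // 0
//    StatusPending               // 1 -- new
//    StatusDeleted               // 2 -- silently changed from 1!
//
// every record persisted with the old value 1 now means the wrong thing.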

This also means there is no way in Go to have the compiler check that a switch statement is exhaustive, and no way to describe the allowed values in a type.

The := / var dilemma

Go provides two ways to declare a variable and assign it a value: var x = "foo" and x := "foo". Why is that?

The main differences are that var allows declaration without initialization (and you then have to declare the type), like in var x string, whereas := requires assignment and allows a mix of existing and new variables. My guess is that := was invented to make error handling a bit less painful:

With var:

var x, err1 = SomeFunction()
if err1 != nil {
  return nil
}

var y, err2 = SomeOtherFunction()
if err2 != nil {
  return nil
}

With :=:

x, err := SomeFunction()
if err != nil {
  return nil
}

y, err := SomeOtherFunction()
if err != nil {
  return nil
}

The := syntax also makes it easy to accidentally shadow a variable. I was caught out more than once, as := (declare and assign) is too close to = (assign), as shown below:

foo := "bar"
if someCondition {
  foo := "baz"
  doSomething(foo)
}
// foo == "bar" even if "someCondition" is true

Zero values that panic

Go doesn't have constructors. Because of that, it insists on the fact that the "zero value" should be readily usable. This is an interesting approach, but in my opinion the simplification it brings is mostly for the language implementors.

In practice, many types can't do anything useful without proper initialization. Let's look at the os.File type that is taken as an example in Effective Go:

type File struct {
    *file // os specific
}

func (f *File) Name() string {
    return f.name
}

func (f *File) Read(b []byte) (n int, err error) {
    if err := f.checkValid("read"); err != nil {
        return 0, err
    }
    n, e := f.read(b)
    return n, f.wrapErr("read", e)
}

func (f *File) checkValid(op string) error {
    if f == nil {
        return ErrInvalid
    }
    return nil
}

What can we see here?

  • Calling Name() on a zero-value File will panic, because its file field is nil.

  • The Read function, and pretty much every other File method, starts by checking that the file was initialized.

So basically a zero-value File is not only useless, but can lead to panics. You have to use one of the constructor functions like Open or Create. And checking proper initialization is an overhead you have to pay at every function call.

There are countless types like this one in the standard library, and some that don't even try to do something useful with their zero value. Call any method on a zero-value html.Template: they all panic.

And there is also a serious gotcha with map's zero value: you can query it, but storing something in it will panic:

var m1 = map[string]string{} // empty map
var m0 map[string]string     // zero map (nil)

println(len(m1))   // outputs '0'
println(len(m0))   // outputs '0'
println(m1["foo"]) // outputs ''
println(m0["foo"]) // outputs ''
m1["foo"] = "bar"  // ok
m0["foo"] = "bar"  // panics!

This requires care when a structure has a map field, since the map has to be initialized before adding entries to it, as sketched below.
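
A minimal sketch (Counter is a made-up type):

package main

type Counter struct {
    counts map[string]int
}

func main() {
    var c Counter
    // c.counts["foo"]++ // would panic: assignment to entry in nil map

    c.counts = make(map[string]int) // must initialize the field first
    c.counts["foo"]++
}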

So, as a developer, you constantly have to check whether a structure you want to use requires a constructor function or whether its zero value is usable. That's a lot of burden put on code writers in exchange for some simplifications in the language.

Go doesn't have exceptions. Oh wait... it does!

The blog post "Why Go gets exceptions right" explains in detail why exceptions are bad, and why the Go approach of returning errors is better. I can agree with that: exceptions are hard to deal with when using asynchronous programming or a functional style like Java streams (let's put aside that the former isn't necessary in Go thanks to goroutines, and the latter simply isn't possible). The blog post mentions panic as "always fatal to your program, game over", which is fine.

Now "Defer, panic and recover" that predates it, explains how to recover from panics (by actually catching them), and says "For a real-world example of panic and recover, see the json package from the Go standard library".

And indeed, the json decoder has a common error handling function that just panics; the panic is recovered in the top-level unmarshal function, which checks the panic type and returns it as an error if it's a "local panic", or re-panics otherwise (losing the original panic's stack trace on the way).

To any Java developer this definitely looks like a try / catch (DecodingException ex). So Go does have exceptions, uses them internally, but tells you not to use them. A sketch of the pattern follows.
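
This is not the actual json source, just a minimal sketch of the same panic-as-exception pattern (decodeError and fail are made up):

package main

import "fmt"

type decodeError struct{ msg string }

// fail plays the role of "throw".
func fail(msg string) { panic(decodeError{msg}) }

// Decode plays the role of the top-level function with the catch block.
func Decode(input string) (err error) {
    defer func() {
        if r := recover(); r != nil {
            if de, ok := r.(decodeError); ok {
                err = fmt.Errorf("decode: %s", de.msg) // the "catch"
                return
            }
            panic(r) // not ours: re-panic
        }
    }()
    fail("unexpected token") // deep in the call stack
    return nil
}

func main() {
    fmt.Println(Decode("{")) // decode: unexpected token
}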

Fun fact: a non-googler fixed the json decoder a couple of weeks ago to use regular errors bubbling up.

The Ugly

The dependency management nightmare

Let's start by quoting Jaana Dogan (aka JBD), a well-known gopher at Google, who recently vented her frustration on Twitter.

Let's put it simply: there is no dependency management in Go. All current solutions are just hacks and workarounds.

This goes back to Go's origins at Google, which famously uses a giant monolithic repository for all its source code. No need for module versioning, no need for third-party module repositories: you build everything from your current branch. Unfortunately this doesn't work on the open Internet.

Adding a dependency in Go means cloning that dependency's source code repo into your GOPATH. What version? The current master branch at the time of cloning, whatever it contains. What if different projects need different versions of a dependency? They can't have them. The notion of "version" doesn't even exist.

Also, your own project has to live in GOPATH or the compiler won't find it. Want to have your projects cleanly organized in a separate directory? You have to hack per-project GOPATH or fiddle with symbolic links.

The community has developed workarounds with a large number of tools. Package management tools introduced vendoring and lock files holding the Git sha1 of whatever you cloned, to provide reproducible builds.

Finally, in Go 1.6 the vendor directory was officially supported. But it's about vendoring what you cloned, and still not proper version management: there is no answer to conflicting imports from transitive dependencies, which are usually solved with semantic versioning.

Things are getting better though: dep, the official dependency management tool, was recently introduced to support vendoring. It supports versions (git tags) and has a version solver that follows semantic versioning conventions. It's not stable yet, but it goes in the right direction. It still requires your project to live in GOPATH though.

dep may not live long though, as vgo, also from Google, wants to bring versioning into the language itself and has been making waves lately.

So dependency management in Go is nightmarish: painful to set up, and easy to forget about while developing until it blows up when you add a new import or simply want to pull a team member's branch into your GOPATH...

Let's go back to the code now.

Mutability is hardcoded in the language

There is no way to define immutable structures in Go: struct fields are mutable and the const keyword doesn't apply to them. Go makes it easy however to copy an entire struct with a simple assignment, so we may think that passing arguments by value is all that is needed to have immutability at the cost of copying.

However, and unsurprisingly, this does not copy values referenced by pointers. And since the built-in collections (map, slice and array) are references and are mutable, copying a struct that contains one of these just copies the pointer to the same underlying memory.

The example below illustrates this:

type S struct {
    A string
    B []string
}

func main() {
    x := S{"x-A", []string{"x-B"}}
    y := x // copy the struct
    y.A = "y-A"
    y.B[0] = "y-B"

    fmt.Println(x, y)
    // Outputs "{x-A [y-B]} {y-A [y-B]}" -- x was modified!
}

So you have to be extremely careful about this, and not assume immutability if you pass a parameter by value.

There are some deepcopy libraries that attempt to solve this using (slow) reflection, but they fall short since private fields can't be accessed with reflection. So defensive copying to avoid race conditions will be difficult, requiring lots of boilerplate code. Go doesn't even have a Clone interface that would standardize this; you end up writing copies by hand, as sketched below.
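
A hand-written deep copy for the S struct from the example above; this is a sketch of the boilerplate, not a standard API:

// Clone returns a deep copy of s: the slice gets its own backing array,
// so mutating the copy no longer touches the original.
func (s S) Clone() S {
    b := make([]string, len(s.B))
    copy(b, s.B)
    return S{A: s.A, B: b}
}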

Slice gotchas

Slices come with many gotchas: as explained in "Go slices: usage and internals", re-slicing a slice doesn't copy the underlying array for performance reasons. This is a laudable goal but means that sub-slices of a slice are just views that follow the mutations of the original slice. So don't forget to copy() a slice if you want to separate it from its origin.

Forgetting to copy() becomes even more dangerous with the append function: appending values to a slice resizes the underlying array if it doesn't have enough capacity to hold the new values. This means that the result of append may or may not point to the original array, depending on the initial capacity. This can cause hard-to-find, non-deterministic bugs.

In the code below we see that the effects of a function appending values to a sub-slice vary depending on the capacity of the original slice:

func doStuff(value []string) {
    fmt.Printf("value=%v\n", value)

    value2 := value[:]
    value2 = append(value2, "b")
    fmt.Printf("value=%v, value2=%v\n", value, value2)

    value2[0] = "z"
    fmt.Printf("value=%v, value2=%v\n", value, value2)
}

func main() {
    slice1 := []string{"a"} // length 1, capacity 1

    doStuff(slice1)
    // Output:
    // value=[a] -- ok
    // value=[a], value2=[a b] -- ok: value unchanged, value2 updated
    // value=[a], value2=[z b] -- ok: value unchanged, value2 updated

    slice10 := make([]string, 1, 10) // length 1, capacity 10
    slice10[0] = "a"

    doStuff(slice10)
    // Output:
    // value=[a] -- ok
    // value=[a], value2=[a b] -- ok: value unchanged, value2 updated
    // value=[z], value2=[z b] -- WTF?!? value changed???
}

Mutability and channels: race conditions made easy

Go concurrency is built on CSP using channels which make coordinating goroutines much simpler and safer than synchronizing on shared data. The mantra here is "Do not communicate by sharing memory; instead, share memory by communicating". This is wishful thinking however and cannot be achieved safely in practice.

As we saw above there is no way in Go to have immutable data structures. This means that once we send a pointer on a channel, it's game over: we share mutable data between concurrent processes. Of course a channel of structures (and not pointers) copies the values sent on the channel, but as we saw above, this doesn't deep-copy references, including slices and maps, which are intrinsically mutable. Same goes with struct fields of an interface type: they are pointers, and any mutation method defined by the interface is an open door to race conditions.

So although channels apparently make concurrent programming easy, they don't prevent race conditions on shared data. And the intrinsic mutability of slices and maps makes them even more likely to happen.

Talking about race conditions: Go includes a race detection mode (enabled with the -race flag), which instruments the code to find unsynchronized shared accesses. It can only detect race problems when they happen though, so mostly during integration or load tests, hoping those exercise the racy code paths. It cannot realistically be enabled in production because of its high runtime cost, except for temporary debugging sessions.

Noisy error management

Something you will learn quickly in Go is the error handling pattern, repeated ad nauseam:

someData, err := SomeFunction()
if err != nil {
    return err
}

Because Go claims not to support exceptions (although it does), every function that can fail must have an error as its last result. This applies in particular to every function that performs I/O, so this verbose pattern is extremely prevalent in network applications, Go's primary domain.

Your eye will quickly develop a visual filter for this pattern and identify it as "yeah, error handling", but still it's a lot of noise and it's sometimes hard to find the actual code in the middle of error handling.

There are a couple of gotchas though, since an error result can actually be a nominal case, as for example when reading from the ubiquitous io.Reader:

len, err := reader.Read(bytes)
if err != nil {
    if err == io.EOF {
        // All good, end of file
    } else {
        return err
    }
}

In "Error has values" Rob Pike suggests some strategies to reduce error handling verbosity. I find them to be actually dangerous band-aid:

type errWriter struct {
    w   io.Writer
    err error
}

func (ew *errWriter) write(buf []byte) {
    if ew.err != nil {
        return // Write nothing if we already errored-out
    }
    _, ew.err = ew.w.Write(buf)
}

func doIt(fd io.Writer) error {
    ew := &errWriter{w: fd}
    ew.write(p0[a:b])
    ew.write(p1[c:d])
    ew.write(p2[e:f])
    // and so on
    return ew.err
}

Basically this recognizes that checking errors all the time is painful, and provides a pattern to just ignore errors in a write sequence until its end. So any operation performed to feed the writer after it has errored out is still executed, even though we know it shouldn't be. What if those operations are more expensive than just taking a slice? We've wasted resources because Go's error handling is a pain.

Rust had a similar issue: by not having exceptions (really not, contrarily to Go), functions that can fail return Result and require some pattern matching on the result. So Rust 1.0 came with the try! macro and later, recognizing the pervasiveness of this pattern, made it a first-class language feature: the ? operator. So you get the terseness of the above code while keeping correct error handling.

Transposing Rust's approach to Go is unfortunately not possible, because Go has neither generics nor macros.

Nil interface values

This is an update after redditor jmickeyd showed a weird behaviour of nil and interfaces that definitely qualifies as ugly. I expanded it a bit:

type Explodes interface {
    Bang()
    Boom()
}

// Type Bomb implements Explodes
type Bomb struct {}
func (*Bomb) Bang() {}
func (Bomb) Boom() {}

func main() {
    var bomb *Bomb = nil
    var explodes Explodes = bomb
    println(bomb, explodes) // '0x0 (0x10a7060,0x0)'
    if explodes != nil {
        explodes.Bang() // works fine
        explodes.Boom() // panic: value method main.Bomb.Boom called using nil *Bomb pointer
    }
}

The above code verifies that explodes is not nil, and yet the code panics in Boom but not in Bang. Why? The explanation is in the println line: the bomb pointer is 0x0, which is effectively nil, but explodes is the non-nil pair (0x10a7060,0x0).

The first element of the pair is the pointer to the method dispatch table for the implementation of the Explodes interface by the *Bomb type, and the second element is the address of the actual Bomb value, which is nil.

The call to Bang succeeds because it applies to a pointer to a Bomb: there is no need to dereference the pointer to call the method. The Boom method acts on a value, so calling it dereferences the nil pointer, which panics.

Note that if we had written var explodes Explodes = nil, then the != nil test would have failed.

So how should we write the test in a safe way? We have to nil-check both the interface value and if non-nil, check the value pointed to by the interface object... using reflection!

if explodes != nil && !reflect.ValueOf(explodes).IsNil() {
    explodes.Bang() // works fine
    explodes.Boom() // works fine
}

Bug or feature? The Tour of Go has a dedicated page to explain this behaviour and clearly says "Note that an interface value that holds a nil concrete value is itself non-nil".

Still, this is ugly and can cause very subtle bugs. It looks to me like a big flaw in the language design, made to ease its implementation.

Struct field tags: runtime DSL in a string

If you've used JSON in Go, you've certainly encountered something similar:

type User struct {
    Id string    `json:"id"`
    Email string `json:"email"`
    Name string  `json:"name,omitempty"`
}

These are struct tags, which the language spec says are a string "made visible through a reflection interface and take part in type identity for structs but are otherwise ignored". So basically: put whatever you want in this string and parse it at runtime using reflection. And panic at runtime if the syntax isn't right.

This string is actually field metadata, something that has existed for decades in many languages as "annotations" or "attributes". With language support, their syntax is formally defined and checked at compile time, while still being extensible.

Why did Go decide to use a raw string that any library can decide to use with whatever DSL it wants, parsed at run time?

Things can get awkward when you use multiple libraries: here's an example taken from Protocol Buffers' Go documentation:

type Test struct {
    Label         *string             `protobuf:"bytes,1,req,name=label" json:"label,omitempty"`
    Type          *int32              `protobuf:"varint,2,opt,name=type,def=77" json:"type,omitempty"`
    Reps          []int64             `protobuf:"varint,3,rep,name=reps" json:"reps,omitempty"`
    Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"`
}

Side note: why are these tags so common when using JSON? Because in Go public fields must use UpperCamelCase, or at least start with an uppercase letter, whereas the common convention for naming fields in JSON is either lowerCamelCase or snake_case. Hence the need for tedious tagging.

The standard JSON encoder/decoder doesn't allow providing a naming strategy to automate the conversion, like Jackson does in Java. This probably explains why all fields in the Docker APIs are UpperCamelCase: it spared its developers from writing these unwieldy tags for their large API.

No generics... at least not for you

It's hard to conceive of a modern statically typed language without generics, but this is what you get with Go: it has no generics... or more precisely, almost no generics, which as we'll see makes it worse than no generics at all.

The built-in slice, map, array and channel are generic. Declaring a map[string]MyStruct clearly shows the use of a generic type that has two parameters. Which is nice, as it allows type safe programming that catches all sorts of errors.

There are however no user-definable generic data structures. This means that you cannot define reusable abstractions that work with any type in a type-safe way. You have to use the untyped interface{} and cast values to the proper type. Any mistake will only be caught at run time and will result in a panic. For a Java developer, it's like going back to pre-Java 5 times, in 2004.

In "Less is exponentially more", Rob Pike surprisingly puts generics and inheritance in the same "typed programming" bag and says he favors composition over inheritance. Not liking inheritance is fine (I actually write a lot of Scala with little inheritance) but generics answer another concern: reusability while preserving type safety.

As we'll see below, the segregation between built-ins with generics and user-defined without generics has consequences on more than developer "comfort" and compile-time type safety: it impacts the whole Go ecosystem.

Go has few data structures beyond slice and map

The Go ecosystem doesn't have many data structures that provide added or different functionality compared to the built-in slice and map. The standard library's container packages provide a few (list, heap, ring), and Go 1.9 added sync.Map. They all share the same caveat: they deal with interface{} values, meaning you lose all type safety.

Let's see an example with sync.Map which is a concurrent map with lower thread contention than guarding a regular map with a mutex:

type MetricValue struct {
    Value float64
    Time time.Time
}

func main() {
    metric := MetricValue{
        Value: 1.0,
        Time: time.Now(),
    }

    // Store a value

    m0 := map[string]MetricValue{}
    m0["foo"] = metric

    m1 := sync.Map{}
    m1.Store("foo", metric) // not type-checked

    // Load a value and print its square

    foo0 := m0["foo"].Value // rely on zero-value hack if not present
    fmt.Printf("Foo square = %f\n", math.Pow(foo0, 2))

    foo1 := 0.0
    if x, ok := m1.Load("foo"); ok { // have to make sure it's present (not bad, actually)
        foo1 = x.(MetricValue).Value // cast interface{} value
    }
    fmt.Printf("Foo square = %f\n", math.Pow(foo1, 2))

    // Sum all elements

    sum0 := 0.0
    for _, v := range m0 { // built-in range iteration on map
        sum0 += v.Value
    }
    fmt.Printf("Sum = %f\n", sum0)

    sum1 := 0.0
    m1.Range(func(key, value interface{}) bool { // no 'range' for you! Provide a function
        sum1 += value.(MetricValue).Value        // with untyped interface{} parameters
        return true // continue iteration
    })
    fmt.Printf("Sum = %f\n", sum1)
}

This is a great illustration of why there aren't many data structures in the Go ecosystem: they are a pain to use compared to the built-in slice and map. And for a simple reason: there are two categories of data structures in Go:

  • aristocracy, the built-in slice, map, array and channel: type safe and generic, convenient to use with range,
  • rest of the world written in Go code: can't provide type safety, clumsy to use because of required casts.

So library-defined data structures really have to provide solid benefits for us developers to be willing to pay the price of losing type safety and the additional code verbosity.

The duality between built-in structures and Go code is painful in more subtle ways when we want to write reusable algorithms. This is an example from the standard library's sort package to sort a slice:

import "sort"

type Person struct {
    Name string
    Age  int
}

// ByAge implements sort.Interface for []Person based on the Age field.
type ByAge []Person

func (a ByAge) Len() int           { return len(a) }
func (a ByAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a ByAge) Less(i, j int) bool { return a[i].Age < a[j].Age }

func SortPeople(people []Person) {
    sort.Sort(ByAge(people))
}

Wait... Seriously? We have to define a new type ByAge that has to implement 3 methods to bridge a generic (in the sense of "reusable") sort algorithm and the typed slice.

The only thing that should matter to us, developers, is the Less function that compares two objects and is domain-dependent. Everything else is noise and boilerplate required by the simple fact that Go has no generics. And we have to repeat it for each and every type that we want to sort. And every comparator too.

Update: Michael Stapelberg points me to sort.Slice, which I had missed. It looks better, although it uses reflection under the hood (eek!) and requires the comparator function to be a closure over the slice to sort, which is still ugly.
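
For comparison, a sketch of SortPeople rewritten with sort.Slice (same Person type as above):

import "sort"

func SortPeople(people []Person) {
    // The less function closes over the people slice itself.
    sort.Slice(people, func(i, j int) bool {
        return people[i].Age < people[j].Age
    })
}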

Every text explaining that Go doesn't need generics shows the sort.Interface approach as "the Go way" of having reusable algorithms while avoiding downcasts to interface{}...

Ok. Now to ease the pain, it would be nice if Go had macros that could generate this nonsensical boilerplate, right?

go generate: ok-ish, but...

Go 1.4 introduced the go generate command to trigger code generation from annotations in the source code. Well, "annotation" here actually means a magic //go:generate comment with strict rules: "the comment must start at the beginning of the line and have no spaces between the // and the go:generate". Get it wrong, add a space and no tool will warn you about it.

This actually covers two kinds of use cases:

  • Generating Go code from other sources: ProtoBuf / Thrift / Swagger schemas, language grammars, etc.

  • Generating Go code that complements existing code, such as stringer, given as an example, which generates a String() method for a series of typed constants (see the sketch after this list).
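
A minimal sketch of that second use case (Weekday is a made-up type; the directive must be written exactly as described above):

package weekday

//go:generate stringer -type=Weekday

type Weekday int

const (
    Sunday Weekday = iota
    Monday
    Tuesday
)

Running go generate invokes stringer, which writes a weekday_string.go file containing the String() method for Weekday.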

The first use case is fine: the added value is that you don't have to fiddle with Makefiles, and the generation instructions can live close to where the generated code is used.

For the second use case, many languages, such as Scala & Rust, have macros (which are mentioned in the design document) that have access to the source code's AST during compilation. Stringer actually imports the Go compiler's parser to traverse the AST. Java doesn't have macros but annotation processors play the same role.

Many languages also don't support macros, so nothing is fundamentally wrong here, except this fragile comment-driven syntax, which again looks like a quick hack that somehow does the job rather than carefully thought-out, coherent language design.

Oh, and did you know that the Go compiler actually has a number of annotations/pragmas and conditional compilation using this fragile comment syntax?

Conclusion

As you probably guessed, I have a love/hate relation with Go. Go is a bit like this friend that you like to hang out with because he's fun and great for small talk around beers, but that you find boring or painful when you want to have deeper conversations, and that you don't want to go on vacation with.

I like the simplicity of Go for writing efficient APIs or network stuff that goroutines make easy to reason about, I hate its limited expressiveness when I have to implement business logic, and I hate all its quirks and gotchas waiting to hit you hard.

Until recently there wasn't really an alternative in the space that Go occupies, which is developing efficient native executables without incurring the pain of C or C++. Rust is progressing quickly, and the more I play with it, the more I find it extremely interesting and superbly designed. Rust feels like one of those friends who take some time to get along with, but whom you'll finally want to engage with in a long-term relationship.

Going back to more technical aspects, you'll find articles saying that Rust and Go don't play in the same league, that Rust is a systems language because it doesn't have a GC, etc. I think this is becoming less and less true. Rust is climbing higher up the stack, with great web frameworks and nice ORMs. It also gives you that warm feeling of "if it compiles, errors will come from the logic I wrote, not from language quirks I forgot to pay attention to".

We also see some interesting movements in the container/service mesh area with the efficient Sozu proxy written in Rust, or Buoyant (developers of Linkerd) developing their new Kubernetes service mesh Conduit as a combination of Go for the control plane (I guess because of the available Kubernetes libraries) and Rust for the data plane for its efficiency and robustness.

Swift is also part of this family of recent alternatives to C and C++. Its ecosystem is still too Apple-centric though, even if it's now available on Linux and server-side APIs are emerging, such as SwiftNIO, which is inspired by Netty.

There is of course no silver bullet and no one-size-fits-all. But knowing the gotchas of the tools you use is important. I hope this blog post has taught you some things about Go that you weren't aware of, so that you avoid the traps rather than getting caught!


1.1.1.1: Fast, privacy-first consumer DNS service


Cloudflare's mission is to help build a better Internet. We're excited today to take another step toward that mission with the launch of 1.1.1.1 — the Internet's fastest, privacy-first consumer DNS service. This post will talk a little about what that is and a lot about why we decided to do it. (If you're interested in the technical details on how we built the service, check out Ólafur Guðmundsson's accompanying post.)

Quick Primer On DNS

DNS is the directory of the Internet. Whenever you click on a link, send an email, or open a mobile app, often one of the first things that has to happen is your device looking up the address of a domain. There are two sides of the DNS network: Authoritative (the content side) and Resolver (the consumer side).

Every domain needs to have an Authoritative DNS provider. Cloudflare, since our launch in September 2010, has run an extremely fast and widely-used Authoritative DNS service. 1.1.1.1 doesn't (directly) change anything about Cloudflare's Authoritative DNS service.

On the other side of the DNS system are resolvers. Every device that connects to the Internet needs a DNS resolver. By default, these resolvers are automatically set by whatever network you're connecting to. So, for most Internet users, when they connect to an ISP, a coffee shop wifi hotspot, or a mobile network, the network operator dictates what DNS resolver to use.

DNS's Privacy Problem

The problem is that these DNS services are often slow and don't respect privacy. What many Internet users don't realize is that even if you're visiting an encrypted website (the one with the little green lock in your browser), that doesn't keep your DNS resolver from knowing the identity of all the sites you visit. That means, by default, your ISP, every wifi network you've connected to, and your mobile network provider have a list of every site you've visited while using them.

Network operators have been licking their chops for some time over the idea of taking their users' browsing data and finding a way to monetize it. In the United States, that got easier a year ago when the Senate voted to eliminate rules that restricted ISPs from selling their users' browsing data. With all the concern over the data that companies like Facebook and Google are collecting on you, it worries us to now add ISPs like Comcast, Time Warner, and AT&T to the list. And, make no mistake, this isn't a US-only problem — ISPs around the world see the same privacy-invading opportunity.

DNS's Censorship Problem

But privacy concerns extend far beyond just ad targeting. Cloudflare operates Project Galileo to protect, at no cost, politically or artistically important organizations around the world from cyber attack. Through the project we protect groups like LGBTQ organizations targeted in the Middle East, journalists covering political corruption in Africa, human rights workers in Asia, and bloggers on the ground covering the conflict in Crimea. We're really proud of the project and we're really good at stopping cyber attacks launched at its participants.

But it's been depressing to us to watch all too frequently how DNS can be used as a tool of censorship against many of the groups we protect. While we're good at stopping cyber attacks, if a consumer's DNS gets blocked there's been nothing we could do to help.

[Image: 8.8.8.8 spray-painted on a wall in Turkey]

In March 2014, for instance, the government of Turkey blocked Twitter after recordings revealing a government corruption scandal leaked online. The Internet was censored by the country's ISPs' DNS resolvers blocking DNS requests for twitter.com. People literally spray-painted 8.8.8.8, the IP of Google's DNS resolver service, on walls to help fellow Turks get back online. Google's DNS resolver is great, but diversity is good and we thought we could do even better.

Building a Consumer DNS Service

The insecurity of the DNS infrastructure struck the team at Cloudflare as a bug at the core of the Internet, so we set out to do something about it. Given we run one of the largest, most interconnected global networks — and have a lot of experience with DNS — we were well positioned to launch a consumer DNS service. We began testing and found that a resolver, running across our global network, outperformed any of the other consumer DNS services available (including Google's 8.8.8.8). That was encouraging.

We began talking with browser manufacturers about what they would want from a DNS resolver. One word kept coming up: privacy. Beyond just a commitment not to use browsing data to help target ads, they wanted to make sure we would wipe all transaction logs within a week. That was an easy request. In fact, we knew we could go much further. We committed to never writing the querying IP addresses to disk and wiping all logs within 24 hours.

Cloudflare's business has never been built around tracking users or selling advertising. We don't see personal data as an asset; we see it as a toxic asset. While we need some logging to prevent abuse and debug issues, we couldn't imagine any situation where we'd need that information longer than 24 hours. And we wanted to put our money where our mouth was, so we committed to retaining KPMG, the well-respected auditing firm, to audit our code and practices annually and publish a public report confirming we're doing what we said we would.

Enter 1.1.1.1

[Image: 1.1.1.1 spray-painted on a wall]

The one thing left was that we needed a pair of memorable IPs. One of the core reasons for the DNS system is that IPs aren't very memorable. 172.217.10.46 isn't nearly as memorable as Google.com. But DNS resolvers inherently can't use a catchy domain, because they are what has to be queried in order to figure out the IP address of a domain in the first place. It's a chicken and egg problem. And, if we wanted the service to be of help in times of crisis like the attempted Turkish coup, we needed something easy enough to remember and spray-paint on walls.

We reached out to the team at APNIC. APNIC is a Regional Internet Registry (RIR) responsible for handing out IPs in the Asia-Pacific region. It is one of five RIRs that manage IP allocation globally, the other four being: ARIN (North America), RIPE (Europe/Middle East), AFRINIC (Africa), and LACNIC (Latin America and the Caribbean).

APNIC's research group held the IP addresses 1.1.1.1 and 1.0.0.1. While the addresses were valid, so many people had entered them into various random systems that they were continuously overwhelmed by a flood of garbage traffic. APNIC wanted to study this garbage traffic but any time they'd tried to announce the IPs, the flood would overwhelm any conventional network.

We talked to the APNIC team about how we wanted to create a privacy-first, extremely fast DNS system. They thought it was a laudable goal. We offered Cloudflare's network to receive and study the garbage traffic in exchange for being able to offer a DNS resolver on the memorable IPs. And, with that, 1.1.1.1 was born.

Seriously, April 1?

The only question that remained was when to launch the new service. This is the first consumer product Cloudflare has ever launched, so we wanted to reach a wider audience. At the same time, we're geeks at heart. 1.1.1.1 has four 1s. So it seemed clear that 4/1 (April 1st) was the date we needed to launch it.


Never mind that it was a Sunday. Never mind that it was on Easter and during Passover. Never mind that it was April Fools Day — a day where tech companies often trot out fictional services they think are cute while the media and the rest of the non-tech world collectively roll their eyes.

We justified it to ourselves by noting that Gmail, another great, non-fictional consumer service, also launched on April 1, back in 2004. Of course, as Cloudflare's PR team has repeatedly pointed out to me in the run-up to launch, the Gmail launch day was a Thursday, and not on Easter. In nearly every media briefing I did this week ahead of the launch, the reporter made me swear that this wasn't a joke. And it's not. I swear. And the best way to prove that is to go to 1.1.1.1, follow the instructions to set it up, and see for yourself. It's real. And it's awesome.

Why Did We Build It?

The answer to why we built the service goes back to our mission: to help build a better Internet. People come to work at Cloudflare every day in order to make the Internet better, more secure, more reliable, and more efficient. It sounds cheesy, but it's true.

When, in 2014, we decided to enable encryption for free for all our customers, a lot of people externally thought we were crazy. In addition to the technical and financial costs, SSL was, at the time, the primary difference between our free and paid service. And yet, it was a hard technical challenge and clearly the right thing to do for the Internet, so we did it. And, in one day, we doubled the size of the encrypted web. I'm proud of the fact that, three and a half years later, the rest of the industry is starting to follow suit. The web should have been encrypted from the beginning. It's a bug that it wasn't. We're doing what we can to fix it.

When, last year, we made DDoS mitigation free and unmetered across all our plans, a lot of people again scratched their heads. But it was the right thing to do. You shouldn't have to have a big bank account to stand up to hackers and bullies online. Over time we're convinced that DDoS mitigation will be a commodity included with all platforms, so of course we should lead the way toward that inevitable end.

Part of the reason we've been able to hire such a great team is that we take on big challenges like this when they're the right thing to do. Walk around the office and our team's laptops are adorned with 1.1.1.1 stickers because we're all proud of what we're doing. That alone made building this a no brainer. (PS - Sound fun? We're hiring.)


Toward a Better DNS Infrastructure

But there's more. DNS itself is a 35-year-old protocol, and it's showing its age. It was never designed with privacy or security in mind. In our conversations with browser, operating system, app, and router manufacturers, nearly everyone lamented that, even with a privacy-first service like 1.1.1.1, DNS is inherently unencrypted, so it leaks data to anyone monitoring your network connection. That's harder for someone like your ISP to exploit than if they ran the DNS resolver themselves, but it's still not secure.

What's needed is a move to a new, modern protocol. There are a couple of different approaches. One is DNS-over-TLS, which takes the existing DNS protocol and adds transport layer encryption. Another is DNS-over-HTTPS. It includes security, but also all the modern enhancements like support for other transport layers (e.g., QUIC) and new technologies like HTTP/2 Server Push. Both DNS-over-TLS and DNS-over-HTTPS are open standards. And, at launch, we've ensured 1.1.1.1 supports both.

We think DNS-over-HTTPS is particularly promising: fast, easy to parse, and encrypted. To date, Google was the only at-scale provider supporting DNS-over-HTTPS. For obvious reasons, however, non-Chrome browsers and non-Android operating systems have been reluctant to build on a service that sends data to a competitor. We're hoping that with an independent DNS-over-HTTPS service now available, we'll see more experiments from browsers, operating systems, routers, and apps to support the protocol.
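
As a taste of how simple the HTTPS flavor is to consume, here is a minimal Go sketch querying the JSON variant of the service; the endpoint and Accept header follow Cloudflare's published documentation, but treat the details as assumptions rather than a reference:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    // Cloudflare's JSON flavor of DNS-over-HTTPS.
    req, err := http.NewRequest("GET",
        "https://cloudflare-dns.com/dns-query?name=example.com&type=A", nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Accept", "application/dns-json")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println(string(body)) // JSON answer with the resolved records
}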

We have no need to be the only such service. More diversity in DNS providers is a Good Thing™. If, over time, a robust ecosystem of networks offering DNS-over-HTTPS support develops then that'll go down as one of the things we'll be proud of in furthering our mission to help build a better Internet.

Tying It All Together

[Chart: DNSPerf DNS resolver performance ranking]

While DNSPerf now ranks 1.1.1.1 as the fastest DNS resolver when querying non-Cloudflare customers (averaging around 14ms globally), there's an added benefit if you're a Cloudflare customer using our Authoritative DNS. Because the authoritative service and the recursive resolver now run on the same network, on the same hardware, we can answer queries for Cloudflare's customers incredibly quickly. We can also support immediate updates, without having to wait for TTLs to expire.

In other words, every new user of 1.1.1.1 makes Cloudflare's Authoritative DNS service a bit better. And, vice versa, every new user of Cloudflare's Authoritative DNS service makes 1.1.1.1 a bit better. So, if you're an existing Cloudflare customer, encourage your users to try 1.1.1.1 and you'll see performance benefits from all those who do.

Visit https://1.1.1.1/ from any device to get started with the Internet's fastest, privacy-first DNS service.
