Choose Firefox Now, or Later You Won't Get a Choice (2014)


I know it's not the greatest marketing pitch, but it's the truth.

Google is bent on establishing platform domination unlike anything we've ever seen, even from late-1990s Microsoft. Google controls Android, which is winning; Chrome, which is winning; and key Web properties in Search, YouTube, Gmail and Docs, which are all winning. The potential for lock-in is vast and they're already exploiting it, for example by restricting certain Google Docs features (e.g. offline support) to Chrome users, and by writing contracts with Android OEMs forcing them to make Chrome the default browser. Other bad things are happening that I can't even talk about. Individual people and groups want to do the right thing but the corporation routes around them. (E.g. PNaCl and Chromecast avoided Blink's Web standards commitments by declaring themselves not part of Blink.) If Google achieves a state where the Internet is really only accessible through Chrome (or Android apps), that situation will be very difficult to escape from, and it will give Google more power than any company has ever had.

Microsoft and Apple will try to stop Google but even if they were to succeed, their goal is only to replace one victor with another.

So if you want an Internet --- which means, in many ways, a world --- that isn't controlled by Google, you must stop using Chrome now and encourage others to do the same. If you don't, and Google wins, then in years to come you'll wish you had a choice and have only yourself to blame for spurning it now.

Of course, Firefox is the best alternative :-). We have a good browser, and lots of dedicated and brilliant people improving it. Unlike Apple and Microsoft, Mozilla is totally committed to the standards-based Web platform as a long-term strategy against lock-in. And one thing I can say for certain is that of all the contenders, Mozilla is least likely to establish world domination :-).


Goodbye, EdgeHTML


Article URL: https://blog.mozilla.org/blog/2018/12/06/goodbye-edge/

Comments URL: https://news.ycombinator.com/item?id=18622516

Points: 1048

# Comments: 581


At 22 years old, Postgres might just be the most advanced database yet



As a techie, many of the debates I engage in boil down to just one question: should we pick the new thing or the proven one? As passionately as this question is debated, there are a handful of technologies for which the answer is easy: why not both?

Postgres is such a technology. Originally released in 1996 (and effectively developed since 1982), it is now 22 years old - yet in many respects, it is the most modern database management system there is. Not only does it come with a simply mind-boggling set of features, but it also transcends being a pure database, having evolved into a fully programmable, integrated data environment, complete with its own programming language, PL/pgSQL.

There are any number of marvels one could discuss with regard to Postgres - but for this article, I'd like to shed light on the five extraordinary features that made it our backend of choice for Arcentry:

Pub/Sub Messaging

Postgres can be used as a clusterable message broker. Granted, it doesn't come with the feature set that purpose-built solutions like RabbitMQ or Kafka offer, but its integration of event-based messaging into the wider data context makes it extremely valuable. Arcentry's on-premise version, for instance, makes use of this pattern. We use Postgres messaging as the backbone for horizontally scalable deployments:

Whenever a user makes a change to any diagram, Arcentry issues a request to a server, which merges the update into a binary JSON document stored in Postgres. Once the write is confirmed, a trigger emits an event to which all other connected servers are subscribed; each server in turn forwards the update to its active users.

This gives us an easy way to provide horizontally scalable realtime updates with strong consistency - all from a single external dependency.
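The mechanism behind this pattern is Postgres' built-in LISTEN/NOTIFY. Here's a minimal sketch of how it could look; the `diagrams` table, the `diagram_updates` channel and the function names are illustrative assumptions, not Arcentry's actual schema:

```sql
-- Hypothetical sketch: broadcast diagram changes via NOTIFY.
CREATE OR REPLACE FUNCTION notify_diagram_change() RETURNS trigger AS $$
BEGIN
  -- Publish the changed row's id on the 'diagram_updates' channel.
  PERFORM pg_notify('diagram_updates', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER diagram_change_trigger
AFTER UPDATE ON diagrams
FOR EACH ROW EXECUTE PROCEDURE notify_diagram_change();

-- Each application server subscribes once per connection:
LISTEN diagram_updates;
```

Because the NOTIFY fires inside the same transaction as the write, subscribers only ever see events for changes that actually committed.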


Triggers

Triggers are functions that run before or after data is manipulated. They are a fantastic way to build validation, transformation and derived logic directly into the database.

Triggers also present a simple way to extend existing database functionality. Take, for example, customers asking for an immutable audit log of changes to Arcentry's accounts table.

Rather than writing an additional query or service endpoint, we simply programmed a trigger into Postgres that runs whenever a row in the accounts table is altered and writes a copy of the current row, complete with a timestamp and the userId that initiated the change, to a separate audit table.
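A sketch of such an audit trigger might look as follows. The table layout, column names and the `app.user_id` session setting are assumptions for illustration, not the real Arcentry schema:

```sql
-- Illustrative audit table for the pattern described above.
CREATE TABLE accounts_audit (
  account_id  integer,
  changed_at  timestamptz NOT NULL DEFAULT now(),
  changed_by  text,
  old_row     jsonb
);

CREATE OR REPLACE FUNCTION audit_account_change() RETURNS trigger AS $$
BEGIN
  -- Copy the previous row state, the timestamp and the acting user.
  INSERT INTO accounts_audit (account_id, changed_by, old_row)
  VALUES (OLD.id, current_setting('app.user_id', true), to_jsonb(OLD));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_audit_trigger
AFTER UPDATE OR DELETE ON accounts
FOR EACH ROW EXECUTE PROCEDURE audit_account_change();
```

Since the copy is written in the same transaction as the change itself, the audit log can never drift out of sync with the data.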

Foreign Data Wrappers

Sometimes it is nice to integrate a user's existing database into Arcentry - whether to query user accounts or to store document data within an established structure. There are, of course, any number of ways to achieve this, but a particularly convenient one is Postgres' Foreign Data Wrappers.

These are endpoints that connect Postgres to any number of other data sources, say MongoDB, Redis, MySQL or even CSV or JSON files. As far as a query statement is concerned, these sources are just regular Postgres tables that can be joined, searched and referenced, becoming an organic part of the database - making Postgres a powerful integration tool and a potential access point for data-lake setups.
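As a concrete sketch, here's how another Postgres instance could be attached via the bundled postgres_fdw extension. The server address, database, credentials and the `customers` table are placeholders:

```sql
-- Minimal postgres_fdw sketch; all names and credentials are hypothetical.
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER legacy_db
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'legacy.example.com', dbname 'crm');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER legacy_db
  OPTIONS (user 'readonly', password 'secret');

-- Expose a remote table locally; it can now be queried like any other table.
IMPORT FOREIGN SCHEMA public LIMIT TO (customers)
  FROM SERVER legacy_db INTO public;

-- Join local data against the remote source transparently.
SELECT d.id, c.name
FROM diagrams d
JOIN customers c ON c.id = d.customer_id;
```

Other wrappers (mongo_fdw, redis_fdw, file_fdw for CSV, and so on) follow the same CREATE SERVER / IMPORT pattern.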


JSON Support

Many databases store JSON or its binary representation, JSONB - and I get that this won't spark too much excitement anymore. But Postgres' manipulation functions make JSON a first-class citizen within a table-based, relational database. Whether it's outputting query results as a nested JSON structure or parsing JSON on the fly, Postgres handles it beautifully.
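A few of the JSONB operators and functions in action; the `documents` table and its `data` column are illustrative assumptions:

```sql
-- Extract a nested field as text with the -> / ->> operators:
SELECT data->'meta'->>'title' FROM documents;

-- Update a single key inside a stored document in place:
UPDATE documents
SET data = jsonb_set(data, '{meta,title}', '"New title"');

-- Emit query results as a nested JSON structure:
SELECT jsonb_build_object(
  'id',   d.id,
  'tags', (SELECT jsonb_agg(t.name) FROM tags t WHERE t.doc_id = d.id)
)
FROM documents d;
```

JSONB columns can also carry GIN indexes, so containment queries against these documents stay fast even at scale.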


Extensions

At times, however, neither PL/pgSQL nor triggers are enough to achieve the functionality one needs. With many other databases, this would simply be the end of it - but Postgres is impressively extensible. Writing Postgres add-ons is not an easy task (trust me, I tried), but fortunately, many talented engineers have done the work for me and built extensions that turn Postgres into an entirely different product.

Take, for instance, PostGIS, which turns the Postgres server into a fully fledged spatial database for geographic information systems (GIS).
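Once installed, spatial types and functions become available directly in SQL. A tiny sketch (the coordinates are just example values for Berlin and Paris):

```sql
-- Distance in meters between two geographic points via PostGIS.
CREATE EXTENSION IF NOT EXISTS postgis;

SELECT ST_Distance(
  ST_SetSRID(ST_MakePoint(13.4050, 52.5200), 4326)::geography,  -- Berlin
  ST_SetSRID(ST_MakePoint(2.3522, 48.8566), 4326)::geography    -- Paris
);
```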

Or how about PipelineDB, which turns Postgres into a time-series store and stream processor.

There are any number of Postgres extensions, tools and GUIs, an overview of which can be found here

A granddad-hipster

What makes all this truly stand out is the example Postgres sets: remaining relevant for 22 years is an almost impossible feat for any software. But keeping a strong focus and investing decades into improving, refining and optimizing an already strong core can create a piece of technology that is as relevant today as it was back in the 1990s.


Firecracker – Lightweight Virtualization for Serverless Computing


One of my favorite Amazon Leadership Principles is Customer Obsession. When we launched AWS Lambda, we focused on giving developers a secure serverless experience so that they could avoid managing infrastructure. In order to attain the desired level of isolation we used dedicated EC2 instances for each customer. This approach allowed us to meet our security goals but forced us to make some tradeoffs with respect to the way that we managed Lambda behind the scenes. Also, as is the case with any new AWS service, we did not know how customers would put Lambda to use or even what they would think of the entire serverless model. Our plan was to focus on delivering a great customer experience while making the backend ever-more efficient over time.

Just four years later (Lambda was launched at re:Invent 2014) it is clear that the serverless model is here to stay. Today, Lambda processes trillions of executions for hundreds of thousands of active customers every month. Last year we extended the benefits of serverless to containers with the launch of AWS Fargate, which now runs tens of millions of containers for AWS customers every week.

As our customers increasingly adopted serverless, it was time to revisit the efficiency issue. Taking our Invent and Simplify principle to heart, we asked ourselves what a virtual machine would look like if it were designed for today’s world of containers and functions!

Introducing Firecracker
Today I would like to tell you about Firecracker, a new virtualization technology that makes use of KVM. You can launch lightweight micro-virtual machines (microVMs) in non-virtualized environments in a fraction of a second, taking advantage of the security and workload isolation provided by traditional VMs and the resource efficiency that comes along with containers.

Here’s what you need to know about Firecracker:

Secure – This is always our top priority! Firecracker uses multiple levels of isolation and protection, and exposes a minimal attack surface.

High Performance – You can launch a microVM in as little as 125 ms today (and even faster in 2019), making it ideal for many types of workloads, including those that are transient or short-lived.

Battle-Tested – Firecracker has been battle-tested and is already powering multiple high-volume AWS services including AWS Lambda and AWS Fargate.

Low Overhead – Firecracker consumes about 5 MiB of memory per microVM. You can run thousands of secure VMs with widely varying vCPU and memory configurations on the same instance.

Open Source – Firecracker is an active open source project. We are ready to review and accept pull requests, and look forward to collaborating with contributors from all over the world.

Firecracker was built in a minimalist fashion. We started with crosvm and set up a minimal device model in order to reduce overhead and to enable secure multi-tenancy. Firecracker is written in Rust, a modern programming language that guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.

Firecracker Security
As I mentioned earlier, Firecracker incorporates a host of security features! Here’s a partial list:

Simple Guest Model – Firecracker guests are presented with a very simple virtualized device model in order to minimize the attack surface: a network device, a block I/O device, a Programmable Interval Timer, the KVM clock, a serial console, and a partial keyboard (just enough to allow the VM to be reset).

Process Jail – The Firecracker process is jailed using cgroups and seccomp BPF, and has access to a small, tightly controlled list of system calls.

Static Linking – The firecracker process is statically linked, and can be launched from a jailer to ensure that the host environment is as safe and clean as possible.

Firecracker in Action
To get some experience with Firecracker, I launch an i3.metal instance and download three files (the firecracker binary, a root file system image, and a Linux kernel):

I need to set up the proper permission to access /dev/kvm:

$  sudo setfacl -m u:${USER}:rw /dev/kvm

I start firecracker in one PuTTY session, and then issue commands in another (the process listens on a Unix-domain socket and implements a REST API). The first command sets the configuration for my first guest machine:

$ curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/machine-config" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{
        \"vcpu_count\": 1,
        \"mem_size_mib\": 512
    }"

And, the second sets the guest kernel:

$ curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/boot-source" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{
        \"kernel_image_path\": \"./hello-vmlinux.bin\",
        \"boot_args\": \"console=ttyS0 reboot=k panic=1 pci=off\"
    }"

And, the third one sets the root file system:

$ curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/drives/rootfs" \
    -H "accept: application/json" \
    -H "Content-Type: application/json" \
    -d "{
        \"drive_id\": \"rootfs\",
        \"path_on_host\": \"./hello-rootfs.ext4\",
        \"is_root_device\": true,
        \"is_read_only\": false
    }"

With everything set to go, I can launch a guest machine:

# curl --unix-socket /tmp/firecracker.sock -i \
    -X PUT "http://localhost/actions" \
    -H  "accept: application/json" \
    -H  "Content-Type: application/json" \
    -d "{
        \"action_type\": \"InstanceStart\"
    }"

And I am up and running with my first VM:

In a real-world scenario I would script or program all of my interactions with Firecracker, and I would probably spend more time setting up the networking and the other I/O. But re:Invent awaits and I have a lot more to do, so I will leave that part as an exercise for you.

Collaborate with Us
As you can see this is a giant leap forward, but it is just a first step. The team is looking forward to telling you more, and to working with you to move ahead. Star the repo, join the community, and send us some code!



Dive: A tool for exploring a docker image, layer contents



A tool for exploring a docker image, layer contents, and discovering ways to shrink your Docker image size.


To analyze a Docker image, simply run dive with an image tag/id/digest:

dive <your-image-tag>

or if you want to build your image then jump straight into analyzing it:

dive build -t <some-tag> .

This is beta quality! Feel free to submit an issue if you want a new feature or find a bug :)

Basic Features

Show Docker image contents broken down by layer

As you select a layer on the left, you are shown the contents of that layer combined with all previous layers on the right. Also, you can fully explore the file tree with the arrow keys.

Indicate what's changed in each layer

Files that have changed, been modified, added, or removed are indicated in the file tree. This can be adjusted to show changes for a specific layer, or aggregated changes up to this layer.

Estimate "image efficiency"

The lower left pane shows basic layer info and an experimental metric that estimates how much wasted space your image contains. This might be from duplicating files across layers, moving files across layers, or not fully removing files. Both a percentage "score" and the total wasted file space are provided.

Quick build/analysis cycles

You can build a Docker image and do an immediate analysis with one command: dive build -t some-tag .

You only need to replace docker with dive in your existing build command.



Installation

Ubuntu/Debian

wget https://github.com/wagoodman/dive/releases/download/v0.3.0/dive_0.3.0_linux_amd64.deb
sudo apt install ./dive_0.3.0_linux_amd64.deb


RHEL/CentOS

wget https://github.com/wagoodman/dive/releases/download/v0.3.0/dive_0.3.0_linux_amd64.rpm
rpm -i dive_0.3.0_linux_amd64.rpm


Mac

brew tap wagoodman/dive
brew install dive

or download a Darwin build from the releases page.

Go tools

go get github.com/wagoodman/dive


Docker

docker pull wagoodman/dive


or

docker pull quay.io/wagoodman/dive

When running you'll need to include the docker client binary and socket file:

docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    wagoodman/dive:latest <dive arguments...>

Docker for Windows (showing PowerShell compatible line breaks; collapse to a single line for Command Prompt compatibility)

docker run --rm -it `
    -v /var/run/docker.sock:/var/run/docker.sock `
    wagoodman/dive:latest <dive arguments...>

Note: depending on the version of docker you are running locally you may need to specify the docker API version as an environment variable:

   DOCKER_API_VERSION=1.37 dive ...

or if you are running with a docker image:

docker run --rm -it \
    -e DOCKER_API_VERSION=1.37 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    wagoodman/dive:latest <dive arguments...>


Keybindings

Key Binding          Description
Ctrl + C             Exit
Tab or Ctrl + Space  Switch between the layer and filetree views
Ctrl + F             Filter files
Ctrl + A             Layer view: see aggregated image modifications
Ctrl + L             Layer view: see current layer modifications
Space                Filetree view: collapse/uncollapse a directory
Ctrl + A             Filetree view: show/hide added files
Ctrl + R             Filetree view: show/hide removed files
Ctrl + M             Filetree view: show/hide modified files
Ctrl + U             Filetree view: show/hide unmodified files
PageUp               Filetree view: scroll up a page
PageDown             Filetree view: scroll down a page


No configuration is necessary, however, you can create a config file and override values:

log:
  enabled: true
  path: ./dive.log
  level: info

# Note: you can specify multiple bindings by separating values with a comma.
# Note: UI hinting is derived from the first binding.
keybinding:
  # Global bindings
  quit: ctrl+c
  toggle-view: tab, ctrl+space
  filter-files: ctrl+f, ctrl+slash

  # Layer view specific bindings
  compare-all: ctrl+a
  compare-layer: ctrl+l

  # File view specific bindings
  toggle-collapse-dir: space
  toggle-added-files: ctrl+a
  toggle-removed-files: ctrl+r
  toggle-modified-files: ctrl+m
  toggle-unmodified-files: ctrl+u
  page-up: pgup
  page-down: pgdn

diff:
  # You can change the default files shown in the filetree (right pane). All diff types are shown by default.
  hide:
    - added
    - removed
    - changed
    - unchanged

filetree:
  # The default directory-collapse state
  collapse-dir: false

  # The percentage of screen width the filetree should take on the screen (must be >0 and <1)
  pane-width: 0.5

  # Enable showing all changes from this layer and every previous layer
  show-aggregated-changes: false

dive will search for configs in the following locations:

  • ~/.dive.yaml
  • $XDG_CONFIG_HOME/dive.yaml
  • ~/.config/dive.yaml


Wasp's venom kills cancer cells without harming normal cells


The Brazilian wasp’s venom contains special chemicals able to kill cancer cells and bacterial cells.

The “Brazilian wasp” is one of the most aggressive species of social wasps on the planet; however, its venom can do something one might call “a miracle”.

Polybia paulista is a species of eusocial wasp found in Brazil. Its venom contains a molecule called “MP1”.

MP1 kills cancer cells by creating holes in their lipid membrane. This causes molecules crucial for cancer cell survival to leak out, killing the cell within seconds. Note that MP1 can also kill bacterial cells.

MP1 selectively kills cancer cells without harming normal cells at all. However, more studies are still needed in order to explore in more depth the potential of MP1 as a cancer treatment drug.

Cancer therapies that attack the lipid composition of the cell membrane would be an entirely new class of anticancer drugs, and would enable a possible treatment for cancer in general.

This could be the start of a new age in medicine, with combination therapies in which multiple drugs are used simultaneously to treat cancer.

If everything goes well, we may see the first cancer treatment that attacks different parts of the cancer cells at the same time.

However, this isn’t the first venom to be explored as a potential source of cancer drugs. Scorpion venom has been gaining interest as a source of new drugs containing a mixture of biological chemicals called peptides.

Some peptides are known to trigger cell death by forming pores in biological membranes. Cell death can be useful if we are able to make tumour cells self-destruct.

These toxins can have very potent effects. One particular small peptide, known as TsAP-1 and isolated from the Brazilian yellow scorpion (Tityus serrulatus), has both anti-microbial and anti-cancer properties.

Experiments show that some of these substances have the ability to bind selectively to cancer cells and inhibit their growth.
