
Kubernetes made my latency 10x higher


Last week my team was busy migrating one microservice to our central platform, which bundles CI/CD, a Kubernetes-based runtime, metrics, and other goodies. This exercise is meant to provide a blueprint to accelerate moving a fleet of ~150 more in the coming months, all of which power some of the main online marketplaces in Spain (Infojobs, Fotocasa, etc.).

After we had the application deployed in Kubernetes and routed some production traffic to it, things started looking worrisome. Request latencies in the Kubernetes deployment were up to 10x higher than on EC2. Unless we found a solution, this would not only become a hard blocker for migrating this microservice, but could also jeopardise the entire project.

Why is latency so much higher in Kubernetes than EC2?

To pinpoint the bottleneck we collected metrics for the entire request path. The architecture is simple: an API Gateway (Zuul) proxies requests to the microservice instances in EC2 or Kubernetes. In Kubernetes we use the NGINX Ingress controller, and the backends are ordinary Deployment objects running a JVM application based on Spring.

                                  EC2
                            +---------------+
                            |  +---------+  |
                            |  |         |  |
                       +-------> BACKEND |  |
                       |    |  |         |  |
                       |    |  +---------+  |                   
                       |    +---------------+
             +------+  |
Public       |      |  |
      -------> ZUUL +--+
traffic      |      |  |              Kubernetes
             +------+  |    +-----------------------------+
                       |    |  +-------+      +---------+ |
                       |    |  |       |  xx  |         | |
                       +-------> NGINX +------> BACKEND | |
                            |  |       |  xx  |         | |
                            |  +-------+      +---------+ |
                            +-----------------------------+

The problem seemed to be upstream latency at the backend (I marked the connection with “xx” in the graph). When the application was deployed in EC2 it took ~20ms to respond. In Kubernetes it was taking 100-200 ms.

We quickly ruled out the likely suspects that could have appeared with the change of runtime. The JVM version was identical. Containerisation itself wasn’t to blame, as the application already ran in containers on EC2. It wasn’t related to load either: we saw high latencies even at 1 request per second. GC pauses were negligible.

One of our Kubernetes admins asked whether the application had any external dependencies, since DNS resolution had caused similar problems in the past. This was our best hypothesis so far.

Hypothesis 1: DNS resolution

On every request, our application makes 1-3 queries to an AWS ElasticSearch instance at a domain similar to elastic.spain.adevinta.com. We got a shell inside the containers and could verify that DNS resolution to that domain was taking too long.

DNS queries from the container:

[root@be-851c76f696-alf8z /]# while true; do dig "elastic.spain.adevinta.com" | grep time; sleep 2; done
;; Query time: 22 msec
;; Query time: 22 msec
;; Query time: 29 msec
;; Query time: 21 msec
;; Query time: 28 msec
;; Query time: 43 msec
;; Query time: 39 msec

The same queries from one of the EC2 instances that run this application:

bash-4.4# while true; do dig "elastic.spain.adevinta.com" | grep time; sleep 2; done
;; Query time: 77 msec
;; Query time: 0 msec
;; Query time: 0 msec
;; Query time: 0 msec
;; Query time: 0 msec

Given the ~30ms resolution times, it seemed clear that our application was paying a DNS resolution overhead on every call to its Elasticsearch.

But this was strange for two reasons:

  • We already have many applications in Kubernetes that communicate with AWS resources and don’t suffer such high latencies. Whatever the cause, it had to be specific to this one.
  • We know that the JVM implements in-memory DNS caching. Looking at the configuration in these images, the TTL was configured at $JAVA_HOME/jre/lib/security/java.security and set to networkaddress.cache.ttl = 10, so the JVM should be caching all DNS lookups for 10 seconds (a quick way to check this is sketched below).
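
For reference, this is a minimal sketch of how one could confirm that TTL from inside the container. The java.security path assumes a JDK 8 layout, app.jar is just a placeholder, and sun.net.inetaddr.ttl is the legacy system property that overrides the file-based setting at startup:

grep networkaddress.cache.ttl $JAVA_HOME/jre/lib/security/java.security   # confirm the value baked into the image
java -Dsun.net.inetaddr.ttl=10 -jar app.jar                               # explicit override, avoids relying on the file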

To confirm the DNS hypothesis we decided to avoid DNS resolution and see if the problem disappeared. Our first thought was to have the application talk directly to the Elasticsearch IP rather than the domain name, but that required changing code and a new deploy, so instead we simply added a line mapping the domain to its IP in /etc/hosts:

34.55.5.111 elastic.spain.adevinta.com

This way the container would resolve the IP almost instantly. We did observe a relative improvement, but nowhere near our target latency. Even though DNS resolution was slower than it should be, the real cause was still hidden.

Network plumbing

We decided to tcpdump from the container so we could see exactly what was going on with the network.

[root@be-851c76f696-alf8z /]# tcpdump -leni any -w capture.pcap

We then sent a few requests and downloaded the capture (kubectl cp my-service:/capture.pcap capture.pcap) to inspect it with Wireshark.

There was nothing suspicious with DNS queries (except a detail I’ll mention later.) But something in the way our service handled each request seemed strange. Below is a screenshot of the capture, showing the reception of a request until the start of the response.

The packet numbers are shown in the first column. I coloured the different TCP streams for clarity.

The green stream starting at packet 328 shows the client (172.17.22.150) opened a TCP connection to our container (172.17.36.147). After the initial handshake (328-330), packet 331 brought an HTTP GET /v1/.., the incoming request to our service. The whole process was over in 1ms.

The grey stream from packet 339 shows that our service sent an HTTP request to the Elasticsearch instance (you don’t see the TCP handshake because it was using an existing TCP connection.) This took 18ms.

So far this makes sense, and the times roughly fit in the overall response latencies we expected (~20-30ms measured from the client).

But in between the two exchanges, the blue stream consumes 86ms. What was going on there? At packet 333, our service sent an HTTP GET request to /latest/meta-data/iam/security-credentials, and right after, on the same TCP connection, another GET to /latest/meta-data/iam/security-credentials/arn:...

We verified that this was happening on every single request for the entire trace. DNS resolution is indeed a bit slower in our containers (the explanation is interesting, I will leave that for another post), but the actual cause of the high latency was the queries to the AWS Instance Metadata service made on each request.
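
One way to confirm this across the whole capture, rather than eyeballing each stream, is to count the metadata requests with tshark. A hedged sketch, assuming tshark is available wherever you copied the capture:

tshark -r capture.pcap -Y 'http.request.uri contains "security-credentials"' | wc -l   # roughly two hits per incoming request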

Hypothesis 2: rogue calls to AWS

Both endpoints are part of the AWS Instance Metadata API. Our microservice uses this service while reading from Elasticsearch. The two calls are a basic authorisation workflow: the endpoint queried in the first request yields the IAM role associated with the instance.

/ # curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
arn:aws:iam::<account_id>:role/some_role

The second request appends that role to the same endpoint to obtain temporary credentials for the instance:

/ # curl http://169.254.169.254/latest/meta-data/iam/security-credentials/arn:aws:iam::<account_id>:role/some_role
{
    "Code" : "Success",
    "LastUpdated" : "2012-04-26T16:39:16Z",
    "Type" : "AWS-HMAC",
    "AccessKeyId" : "ASIAIOSFODNN7EXAMPLE",
    "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    "Token" : "token",
    "Expiration" : "2017-05-17T15:09:54Z"
}

The client is able to use them for a short period of time and is expected to retrieve new ones periodically, before the Expiration deadline. The model is simple: AWS rotates temporary keys often for security reasons, but clients can cache them for a few minutes, amortising the performance penalty of retrieving new credentials.

The AWS Java SDK should be taking care of this process for us but, for some reason, it is not.

Searching through its GitHub issues, we landed on #1921, which gave us the clue we needed.

The AWS SDK refreshes credentials when one of two conditions is met:

  • The time remaining until Expiration is within the EXPIRATION_THRESHOLD, hardcoded to 15 minutes.
  • The time elapsed since the last attempt to refresh credentials is greater than the REFRESH_THRESHOLD, hardcoded to 60 minutes. (Both conditions are sketched below.)
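
To make the interplay concrete, here is a minimal shell sketch of those two conditions; the variable names are ours, not the SDK’s internals. With a credential that expires 15 minutes after being issued, the first condition is already true on the very next request:

now=$(date +%s)
expiration=$(( now + 15 * 60 ))   # a freshly issued credential that expires 15 minutes from now
last_refresh=$now                 # credentials were fetched on this request
if (( expiration - now <= 15 * 60 )) || (( now - last_refresh >= 60 * 60 )); then
    echo "the SDK would refresh credentials again on the next request"
fi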

We wanted to see the actual expiration time of the credentials we were getting, so we ran the two cURL commands shown above against the AWS API, both from the container and from an EC2 instance. The expiration of the credentials retrieved from the container was much shorter: exactly 15 minutes.
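
A quick way to check the remaining lifetime in either environment is to chain the two metadata calls and compare the Expiration against the current time. A small sketch, assuming jq and GNU date are available in the image:

ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
EXPIRATION=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE" | jq -r .Expiration)
echo "credentials expire in $(( ($(date -d "$EXPIRATION" +%s) - $(date +%s)) / 60 )) minutes"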

The problem was now clear: our service would fetch temporary credentials on the first request. Since these had a 15 min expiration time, on the next request the AWS SDK would decide to eagerly refresh them. The same would happen on every request.

Why was the credential expiration time shorter?

The AWS Instance Metadata Service is meant to be used from an EC2 instance, not from Kubernetes, but it is still convenient to let applications keep the same interface. For this we use KIAM, a tool that runs an agent on each Kubernetes node and allows users (engineers deploying applications to the cluster) to associate IAM roles with Pod containers as if they were EC2 instances. It works by intercepting calls to the AWS Instance Metadata service and serving them from its own cache, pre-fetched from AWS. From the point of view of the application, there is no difference from running on EC2.
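
For context, this is roughly how a workload opts in: KIAM reads the role to assume from an annotation on the Pod. The deployment name and role below are illustrative, and KIAM additionally expects the namespace to whitelist which roles may be assumed:

kubectl patch deployment be --patch '{"spec":{"template":{"metadata":{"annotations":{"iam.amazonaws.com/role":"some_role"}}}}}'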

KIAM happens to provide short-lived credentials to Pods, which makes sense as it’s fair to assume that the average lifetime of a Pod is shorter than that of an EC2 instance. The default duration is precisely 15 min.

But if you put both defaults together, you have a problem: each credential provided to the application expires in 15 minutes, and the AWS Java SDK will force a refresh of any credential with less than 15 minutes of validity left.

The result is that every request is forced to refresh the temporary credentials, which requires two extra calls to the AWS metadata API and adds a huge latency penalty to each request. We later found a feature request in the AWS Java SDK that mentions this same issue.

The fix was easy: we reconfigured KIAM to request credentials with a longer expiration period. Once this change was applied, requests started being served without involving the AWS Metadata service and returned to an even lower latency than on EC2.
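
Concretely, the knob lives on the KIAM server side. A hedged sketch of the change, with the flag name as we understand KIAM’s options (worth verifying against the version you run) and all other server flags omitted:

kiam-server --session-duration=60m   # default is 15m, which is exactly the SDK's refresh threshold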

Takeaways

In our experience with these migrations, one of the most frequent sources of problems is not bugs in Kubernetes or other pieces of the platform, nor fundamental flaws in the microservices we’re migrating. Problems often appear simply because we are putting pieces together for the first time.

We are blending complex systems that had never interacted before, expecting them to collaborate as a single, larger system. More moving pieces, more surface for failures, more entropy.

In this case, our high latency wasn’t the result of bugs or bad decisions in Kubernetes, KIAM, the AWS Java SDK, or our microservice. It was a behaviour resulting from the combination of two independent defaults, in KIAM and the AWS Java SDK. Both choices make sense in isolation: an eager credential refresh policy in the AWS Java SDK, and the lower default expiration in KIAM. It is when they come together that the results are pathological. Two independently sound decisions don’t necessarily make sense together.


How to Buy Drugs


Misha Glenny and Callum Lang

In early February this year, what appeared to be a website glitch sent thousands of drugs-buyers into a panic. Liam (not his real name), a student at Manchester University, needed to buy some MDMA for the weekend’s big party. So he did what he had been doing for the last two years: he opened up the Tor browser to get on to the dark web, and typed in the address for Dream Market, the world’s biggest and most dependable source of illegal drugs. Nothing happened. When he tried again, a message popped up on the screen: ‘Hmm. We’re having trouble finding that site. Here are three things you can try: try again later; check your network connection; if you are connected but behind a firewall, check that Tor Browser has permission to access the web.’ Dream Market’s usually excellent customer forum wasn’t working either. It wasn’t until he checked the chatter on Dread, another dark web forum, that Liam discovered that Dream Market had come under attack, through a Distributed Denial of Service (DDOS). A ‘botnet’, a network of zombie computers, had been instructed to access the site repeatedly, triggering its collapse by overloading it – a standard technique used by both cybercriminals and government agencies to disrupt online activity. For as long as this went on, nobody would be able to access Dream Market at all. On Dread, many shared Liam’s pain. ‘I give it two days before I start freaking out all crazy,’ warned Genghis the Xanlord, his moniker suggesting he was fretting about his supply of benzodiazepines (Xanax for preference, but Valium would be fine).

Dream Market had enjoyed an almost complete monopoly on the online illegal drugs trade since 2017, when a combined operation of European and American law enforcement agencies seized the servers of the Hansa and AlphaBay markets, its two most recent predecessors. (The first, biggest and most famous, Silk Road, was taken down in 2013.) Before this year’s DDOS attack, Dream Market had boasted between one and two million users, the majority from just five countries: the UK, Germany, France, the Netherlands and the US. Its customers bought more MDMA – the most sought-after item on the dark web in the UK – than anything else, but cocaine, marijuana, heroin, Xanax, ketamine and LSD weren’t far behind. Prices were reasonable, quality was assured, delivery was fast. By 2013, when Dream Market opened for business, the dark web already accounted for 15 per cent of the illegal drugs sold to UK residents; this year, 29 per cent of drug users admitted to having bought online. The growth of dark web markets has been rapid, and with good reason: customer reviews, much as with Amazon, give a sound indication that you’ll be getting what you paid for; put in your order on a Tuesday and you can pretty much guarantee that an unsuspecting Royal Mail employee will be slotting a package through your letterbox by Friday at the latest. Gaining access to such sites is barely any more difficult than using the everyday web: you just need a little practice with the Tor browser and a Bitcoin wallet – a lesson from a friend, perhaps, or a brief YouTube tutorial – and you’re rewarded with the promise of anonymity, security, reliability and convenience.

New technologies and globalised demand have made it vastly easier to shift and sell illegal drugs with low risk of detection. Once the products reach the UK, there are now four major methods of distribution. The simplest and most old-fashioned, with the highest risk for consumers, involves opportunistic purchases on the street. Word will get out that one corner or another is the place to go to get hold of your drug of choice: this may be your only option if, say, you’re a first-time buyer in a big city or haven’t yet managed to find a network you trust. The quality of the drugs varies dramatically and the chances of being ripped off, arrested or physically attacked are relatively high. But if you’re prepared to pay, just ask around and you will find.

Next, if you are a drug user in a small town or a rural region, there are the ‘county lines’ networks, named for the local phone number advertised to potential buyers in a particular area. County lines networks have attracted a lot of publicity recently. In a major blitz, the police claim to have arrested 743 of the sellers involved and seized drugs worth more than £400,000, along with guns, knives, swords and machetes. The National Crime Agency estimates that the county lines trade is worth £500 million a year, so this wasn’t quite the coup it was presented as being: it’s a 0.1 per cent tip of the iceberg. ‘If you want to start a county line, you take a train to Bournemouth with ten wraps of crack and ten wraps of heroin,’ said Tyler, who operated a county line on the south coast for a few months. ‘You find a user and offer free samples to show the quality of your product. You then strike a deal with the user – if you can deal from their home, using their local contacts, you will continue to supply them with free drugs.’ If the runner is unable to ‘cuckoo’, as this practice is known, he will rent an Airbnb in town and conduct operations from there.

County lines customers tend to be from dysfunctional backgrounds. Those in charge of the networks, who usually operate from bases in the cities, view their clients with a degree of contempt, calling them ‘nitties’ and ‘fiends’. Their biggest-selling products are crack and heroin; the bosses and runners avoid using the drugs themselves. They have little regard for customer satisfaction: they aren’t going to lose sleep over the occasional dead junkie, except as a possible security risk: a death could attract the interest of the police and local media. Nor do they have much regard for those lower down the distribution ladder: these are often teenagers or – the police said after the latest operation – ‘children as young as seven’ who have been coerced or bribed by older gang members. As far as the bosses are concerned they are entirely expendable. After a police operation in May, Sky News reported that ‘519 vulnerable adults and 364 children [were] taken into safeguarding.’ The police often seek to charge those running county lines networks under modern slavery legislation.

A step up from the county line networks is the urban full-service party supplier. Fallowfield in Manchester, Hyde Park in Leeds and Camden Town in London: three places where you may be handed a smart business card indicating that full service is available. One such card recently distributed in Ladbroke Grove had a phone number printed under the name Omar and beneath that the Givenchy logo. But Omar wasn’t selling perfume. Over the last decade, full-service suppliers have placed a good deal of emphasis on customer satisfaction. Their well-off, well-educated clients can afford to be more discerning than county lines users and if they aren’t satisfied they will turn to the dark web instead. So full service means offering quality products at competitive prices with a decent guarantee of security from arrest.

One way full-service dealers advertise their wares is through Snapchat or WhatsApp: they will include a carefully illustrated menu on a ‘story’ – a broadcast to all their contacts that automatically disappears after 24 hours, making it hard for police to trace. All the customer has to do is find out whom to befriend. A little while ago a student dealer in Bristol, nicknamed Narcs, was offering his followers four strains of hash (Malana Cream! Blonde Critical Leb!), various types of weed (Stardawg!), plus ketamine, MDMA, cocaine, speed, LSD, psilocybin, Valium and Xanax. On a formidable menu like this, an emoji is placed beside each item for easy identification of the drug in question. Ketamine is illustrated with a picture of a horse in reference to its origins as an animal tranquilliser. Dutch speed – a loose term that could include any type of amphetamine – has a little man running by in a puff of smoke. Once you’ve placed your order, you should expect to pick up the drugs at the designated rendezvous point within an hour or so. If you find yourself having to wait any longer you may want to choose another supplier: there are plenty of other Omars and Narcses out there. They usually deliver in black Audis or Mercedes – not exactly inconspicuous, but reassuringly expensive. They are courteous and efficient. The competition is stiff, so they strive to ensure that your customer experience exceeds anything their rivals can offer.

This focus on customer satisfaction is a direct consequence of the rise of the dark web markets, the fourth major distribution network. The internet has dramatically improved the experience of drug buyers. The market share of a dark web outlet depends almost entirely on its online reputation. Just as on Amazon or eBay, customer reviews will describe the quality of purchased products as well as reporting on shipping time and the responsiveness of vendors to queries or complaints. If drugs that a buyer has paid for don’t turn up – as once happened to Liam, the Manchester student – a savvy vendor will reship the items without asking for further payment, in the hope of securing the five-star customer reviews they depend on.

As a consequence, the drugs available to the informed buyer are of a higher quality than ever before. They are also safer. The administrators of DNStars.vip – a site on the open web which you don’t need Tor to visit – pose as ordinary users in order to buy samples of popular drugs from major vendors. They then have the drugs chemically tested to see whether they match the seller’s description. This valuable service can save lives. Take the example of a supplier called Monoko, who until recently was one of the largest distributors of benzodiazepines in the UK. He claimed to be selling alprazolam, the generic name for Xanax, but earlier this year DNStars tested his product and reported that what he was actually shipping was doxepin – a far more potent drug that is lethal in large doses. These days, if you dupe your customers you don’t stay in business for long.

Up against the entrepreneurial drive and technological innovations of the retail drug markets, the police are at a disadvantage. It doesn’t help that since the onset of austerity measures in 2010 the UK’s police forces have lost 20,000 officers, 15,000 administrative staff and 600 police stations. As part of his campaign for the Conservative Party leadership, Boris Johnson promised (implausibly) that he would restore the number of officers, if not the admin personnel. But even if they were at full strength the police would struggle to penetrate the dark web markets, whose encryption and anonymity are almost impossible to crack unless the people who run the markets make mistakes. And even when such mistakes do occur – a reused IP address here, a familiar username there – joining the dots and fingering the culprits requires highly skilled computer operatives who are up to date in security and cryptographic techniques and have an advanced understanding of the blockchain technology that underpins the trading. There is a dearth of such operatives in Britain and indeed around the world. We estimate that the median annual salary for a cyber security engineer in the private sector is £56,000. To earn that much in the Metropolitan Police you would need to reach the rank of chief inspector. The incentives for computer experts to work in the public sector are few.

A few UK forces have built up specialist cyber units: notably, the City of London Police, the Met and the National Crime Agency (NCA). Given the constraints, the NCA – which assumes most of the responsibility for the investigation of dark web markets – has had to prioritise its targets. Top of the list – understandably – is fentanyl, the synthetic opioid of choice for those who can no longer get hold of OxyContin after becoming addicted. Fentanyl is one hundred times more powerful than morphine and can be lethal even in low doses; it is often sold mixed with heroin to increase the intensity of the high. The fentanyl crisis began in the US, where it is still raging, but it has had a knock-on effect in Britain (there have been a number of deaths in North-East England in particular). It seems that UK dealers have been buying fentanyl from China, where most of it is manufactured, in order to sell it on to the US – profiting from the fact that drugs coming from the UK are less likely to be intercepted by US law enforcement than those coming directly from China. And, according to a well-established pattern, a country that becomes a transit zone for drugs soon develops its own habit. But, interestingly, the dark web itself seems to have helped keep Britain’s fentanyl habit in check. Before it was taken down by the DDOS attack, Dream Market ‘almost did the work of the police themselves’, according to Lawrence Gibbons, the NCA’s head of Drug Threat, by prohibiting the sale of fentanyl through its site. Coke, heroin and cannabis were one thing, but ‘fentanyl and firearms … attract law enforcement attention to a much higher degree’ – the sort of attention a well-run business avoids.

*

By their own admission, UK police can’t contain the drugs trade on the dark web. They can occasionally disrupt and deter. Every time one site is taken down a new market, with improved security techniques, is ready to take the number one spot. But this leaves unanswered the question of what happened to Dream Market. When a site like this stops working, one of two things is probably happening. First, a law enforcement agency may have taken control of it, in which case buyers and sellers alike can say goodbye to any cryptocurrency they may have locked in the site’s escrow system. What’s more, police may be harvesting the data – IP addresses, even home addresses – of users who haven’t taken the necessary care to obscure their details: a dreadful thought for everyday drug buyers. The second possibility is known as the exit scam. Here the site’s own administrators decide to appropriate, at a stroke, all the money sitting in the escrow accounts – and then, essentially, do a runner. On dark web forums, the exit scam generates far more fury than a police takedown, since scammers have failed to uphold the honour among thieves on which commercial criminal activity relies.

But what happened to Dream Market didn’t look like an exit scam. The DDOS attack lasted almost three months. And then, at the beginning of April, without warning, users suddenly found they could log in again – only to see a message stating that the market would soon be ceasing activities altogether and that after 30 April people would no longer be able to withdraw funds from Bitcoin wallets on the site. The threat of seized funds had an air of the exit scam, but when users requested their money back before the deadline, it was returned as promised. It only became clear that something more complicated might be going on when, following Dream Market’s announcement of its imminent closure, another site, Wall Street Market, took over as the biggest dark web market in the world – in the space of about a week.

In early May, Europol and German police made their move. Three German nationals – WSM’s administrators – were arrested, and the site’s server was seized by Finnish customs. The FBI, which was also involved in the operation, issued an indictment that bigged up the scale of the achievement: ‘WSM was one of the largest and most voluminous dark net marketplaces of all time, made up of approximately 5400 vendors and 1,150,000 customers around the world.’ In fact, for most of its existence WSM had been no more than a peripheral competitor to Dream Market, with just a few thousand users; many dark web customers avoided it because it was poorly designed, unreliable and not necessarily to be trusted with regard to security. The sceptics turned out to be right: as large numbers of Dream Market’s ex-customers flooded in, WSM’s administrators perpetrated a real exit scam, removing millions of dollars in Bitcoin from users’ escrow wallets.

If it hadn’t been for this brazen theft the police might have waited longer before making their arrests. But the rapidity of events made it clear that they already knew the identities of WSM’s administrators, and that – unlike Dream Market, with its high-level security – it was a site that could easily be brought down by low-tech means. Could it be that the DDOS directed against Dream Market – the only way law enforcement could interfere with its activities – was arranged in order to herd its users elsewhere, to a poorly defended site that the police already effectively controlled? No official spokesman from the police forces involved would confirm this but a senior German law enforcement official who had previously worked at Europol admitted to us that this was what happened. So law enforcement knows a few tricks too. But any impact on the drugs market of an operation like this is only ever temporary. New sites with new rules are popping up all the time. The party continues.


PostgREST


PostgREST is a standalone web server that turns your PostgreSQL database directly into a RESTful API. The structural constraints and permissions in the database determine the API endpoints and operations.

Using PostgREST is an alternative to manual CRUD programming. Custom API servers suffer problems. Writing business logic often duplicates, ignores or hobbles database structure. Object-relational mapping is a leaky abstraction leading to slow imperative code. The PostgREST philosophy establishes a single declarative source of truth: the data itself.

It’s easier to ask PostgreSQL to join data for you and let its query planner figure out the details than to loop through rows yourself. It’s easier to assign permissions to db objects than to add guards in controllers. (This is especially true for cascading permissions in data dependencies.) It’s easier to set constraints than to litter code with sanity checks.

There is no ORM involved. Creating new views happens in SQL with known performance implications. A database administrator can now create an API from scratch with no custom programming.
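
As a small illustration, each table or view in the exposed schema becomes an endpoint, and filters, column selection, and ordering are expressed as query parameters. The films table and port here are placeholders, but the operator syntax is PostgREST’s own:

curl "http://localhost:3000/films?year=gte.2000&select=title,year&order=year.desc"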

PostgREST has a focused scope. It works well with other tools like Nginx. This forces you to cleanly separate the data-centric CRUD operations from other concerns. Use a collection of sharp tools rather than building a big ball of mud.

The project has a friendly and growing community. Join our chat room for discussion and help. You can also report or search for bugs/features on the GitHub issues page.

Are you new to PostgREST? This is the place to start!

Also have a look at Installation.

Technical references for PostgREST’s functionality.

These are recipes that’ll help you address specific use-cases.

Explanations of some key concepts in PostgREST.

PostgREST has a growing ecosystem of examples, libraries, and experiments. Here is a selection.

Here we’ll include the most relevant changes so you can migrate to newer versions easily. You can see the full changelog of each release in the PostgREST repository.

Here are some companies that use PostgREST in production.

“It’s so fast to develop, it feels like cheating!”

—François-G. Ribreau

“I just have to say that, the CPU/Memory usage compared to our Node.js/Waterline ORM based API is ridiculous. It’s hard to even push it over 60/70 MB while our current API constantly hits 1GB running on 6 instances (dynos).”

—Louis Brauer

“I really enjoyed the fact that all of a sudden I was writing microservices in SQL DDL (and v8 javascript functions). I dodged so much boilerplate. The next thing I knew, we pulled out a full rewrite of a Spring+MySQL legacy app in 6 months. Literally 10x faster, and code was super concise. The old one took 3 years and a team of 4 people to develop.”

—Simone Scarduzio

“I like the fact that PostgREST does one thing, and one thing well. While PostgREST takes care of bridging the gap between our HTTP server and PostgreSQL database, we can focus on the development of our API in a single language: SQL. This puts the database in the center of our architecture, and pushed us to improve our skills in SQL programming and database design.”

—Eric Bréchemier, Data Engineer, eGull SAS

“PostgREST is performant, stable, and transparent. It allows us to bootstrap projects really fast, and to focus on our data and application instead of building out the ORM layer. In our k8s cluster, we run a few pods per schema we want exposed, and we scale up/down depending on demand. Couldn’t be happier.”

—Anupam Garg, Datrium, Inc.


Sea urchin population soars 100x in five years, devastating US coastline


[unable to retrieve full-text content]


New Query Language for Graph Databases to Become International Standard


Neo4j Backs Launch of GQL Project: First New ISO Database Language Since SQL

SAN MATEO, Calif. – September 17, 2019  – Neo4j, the leader in graph databases, announced today that the international committees that develop the SQL standard have voted to initiate GQL (Graph Query Language) as a new database query language. Now to be codified as the international standard declarative query language for property graphs, GQL represents the culmination of years of effort by Neo4j and the broader database community.

Neo4j News: GQL to incorporate and consider several graph database languages

The initiative for GQL was first advanced in the GQL Manifesto in May 2018. A year later, the project was considered at an international gathering in June. Ten countries including the United States, Germany, UK, Korea, and China have now voted in favor, with seven countries promising active participation by national experts.

It has been well over 30 years since ISO/IEC began the SQL project. SQL went on to become the dominant language for accessing relational data, achieving wide adoption by vendors and practitioners and dramatically accelerating the growth of the relational database market. The GQL project will initiate development of the next generation of technology standards for accessing data, optimized for today’s world of connected data. Its charter mandates building on core foundations already established by SQL, as well as ongoing collaboration to ensure SQL and GQL compatibility and interoperability. 

Stefan Plantikow, Product Manager and Standards Engineer for Property Graph Querying at Neo4j, serves as a GQL project lead and editor of the planned GQL specification. He has many years of experience developing the Cypher language, a key source for GQL.

“I believe now is the perfect time for the industry to come together and define the next generation graph query language standard,” said Plantikow. “It’s great to receive formal recognition of the need for a standard language. Building upon a decade of experience with property graph querying, GQL will support native graph data types and structures, its own graph schema, a pattern-based approach to data querying, insertion and manipulation, and the ability to create new graphs, and graph views, as well as generate tabular and nested data. Our intent is to respect, evolve, and integrate key concepts from several existing languages including graph extensions to SQL.”

GQL reflects fast growth in the graph database market, demonstrated by increasing adoption of the Cypher language, which has shown potential and powered the demand for a single, standard language to play the role of SQL for graph databases. 

In addition to Neo4j, many companies are already taking part in GQL-related activities, including Redis Labs, SAP and IBM. National experts from China and Korea are joining existing participants centred in Europe and the U.S.

Keith Hare, who has been active in the SQL Standards process since 1988 and has served as the chair of the international SQL standards committee for database languages since 2005, charted the progress toward GQL. 

“We have reached a balance of initiating GQL, the database query language of the future, whilst preserving the value and ubiquity of SQL,” said Hare. “Our committee has been heartened to see strong international community participation to usher in the GQL project. Such support is the mark of an emerging de jure and de facto standard.”

For more information about graph query language standardization, see It’s Time for a Single Property Graph Query Language on the Neo4j blog.

Share this on Twitter

Resources

About Neo4j

Neo4j is the leading graph database platform that drives innovation and competitive advantage at Airbus, Comcast, eBay, NASA, UBS, Walmart and more. Thousands of community deployments and more than 300 customers harness connected data with Neo4j to reveal how people, processes, locations and systems are interrelated. Using this relationships-first approach, applications built using Neo4j tackle connected data challenges including artificial intelligence, fraud detection, real-time recommendations and master data. Find out more at neo4j.com.      

Contact:
Neo4j Media Hotline:
pr@neo4j.com
neo4j.com/pr  


Unprotected database exposes sensitive data of over 20M Ecuadoreans


What happened?

Security researchers from vpnMentor, Noam Rotem and Ran Locar, uncovered an unprotected Elasticsearch database belonging to a consulting company named Novaestrat.

What is the impact?

The leaky database contained around 18 GB of data, impacting over 20 million individuals in Ecuador by exposing their sensitive personal information to the public.

  • The exposed information appears to be obtained from third-party sources including Ecuadorian government registries, an automotive association called Aeade, and an Ecuadorian national bank named Biess.
  • The leaked records also included an entry for WikiLeaks founder Julian Assange.

“This data breach is particularly serious simply because of how much information was revealed about each individual. Scammers could use this information to establish trust and trick individuals into exposing more information,” said the researchers.

The researchers who uncovered the leak contacted the owner, and the database was promptly secured.

What information was exposed?

The leaky database exposes the personal information of individuals and their family members, employment details, financial information, automotive records, and more.

  • The unsecured database has exposed the personal information of individuals such as their names, gender, dates of birth, place of birth, addresses, email addresses, phone numbers, marital status, date of marriage if married, date of death if deceased, and educational details.
  • The database contained financial information related to accounts held with the Ecuadorian national bank Biess. The financial data includes account status, current balance in the account, amount financed, credit type, location, and contact information.
  • The leaky database included information about the individual's family members such as the names of their mother, father, and spouse along with their “cedula” value, which may be a national identification number.
  • The database exposed various automotive records including car’s license plate number, make, model, date of purchase, most recent date of registration, and other technical details about the model.
  • Individuals’ detailed employment information including employer name, employer location, employer tax identification number, job title, salary information, job start date, and end date were also exposed.
  • The unsecured database also revealed details related to various companies in Ecuador.

“The data breach could also have an impact on Ecuadorian companies. The leaked data included information about many companies’ employees, as well as details about some companies themselves. These companies may be at risk of business espionage and fraud,” researchers said in a blog.
