Go best practices, six years in


(This article was originally a talk at QCon London 2016. Video and slides here.)

In 2014, I gave a talk at the inaugural GopherCon titled Best Practices in Production Environments. We were early adopters at SoundCloud, and by that point had been writing, running, and maintaining Go in production in one form or another for nearly 2 years. We had learned a few things, and I tried to distill and convey some of those lessons.

Since then, I’ve continued working in Go full-time, later on the activities and infrastructure teams at SoundCloud, and now at Weaveworks, on Weave Scope and Weave Mesh. I’ve also been working hard on Go kit, an open-source toolkit for microservices. And all the while, I’ve been active in the Go community, meeting lots of developers at meetups and conferences throughout Europe and the US, and collecting their stories—both successes and failures.

With the 6th anniversary of Go’s release in November of 2015, I thought back to that first talk. Which of those best practices have stood the test of time? Which have become outmoded or counterproductive? Are there any new practices that have emerged? In March, I had the opportunity to give a talk at QCon London where I reviewed the best practices from 2014 and took a look at how Go has evolved in 2016. Here’s the meat of that talk.

I’ve highlighted the key takeaways as linkable Top Tips.

  Top Tip — Use Top Tips to level up your Go game.

And a quick table of contents…

  1. Development environment
  2. Repository structure
  3. Formatting and style
  4. Configuration
  5. Program design
  6. Logging and instrumentation
  7. Testing
  8. Dependency management
  9. Build and deploy
  10. Conclusion

Development environment

Go has development environment conventions centered around the GOPATH. In 2014 I advocated strongly for a single global GOPATH. My position has softened a bit. I still think that’s the best idea, all else equal, but depending on your project or team, other things may make sense, too.

If you or your organization produces primarily binaries, you might find some advantages with a per-project GOPATH. There’s a new tool, gb, from Dave Cheney and contributors, which replaces the standard go tooling for this use-case. A lot of people are reporting a lot of success with it.

Some Go developers use a two-entry GOPATH, e.g. $HOME/go/external:$HOME/go/internal. The go tool has always known how to deal with this: go get will fetch into the first path, so it can be useful if you need strict separation of third-party vs. internal code.

One thing I’ve noticed some developers forget to do: put GOPATH/bin into your PATH. This allows you to easily run binaries you get via go get, and makes the (preferred) go install mechanism of building code easier to work with. No reason not to do it.

  Top Tip — Put $GOPATH/bin in your $PATH, so installed binaries are easily accessible.

Regarding editors and IDEs, there’s been a lot of steady improvement. If you’re a vim warrior, life has never been better: thanks to the tireless and extremely capable efforts of Fatih Arslan, the vim-go plugin is in an absolutely exceptional state, best-in-class. I’m not as familiar with emacs, but Dominik Honnef’s go-mode.el is still the big kahuna there.

Moving up the stack, lots of folks are still using and having success with Sublime Text + GoSublime. And it’s hard to beat the speed. But more attention seems to be paid lately to the Electron-powered editors. Atom + go-plus has many fans, especially among developers who frequently switch between Go and JavaScript. The dark horse has been Visual Studio Code + vscode-go, which, while slower than Sublime Text, is noticeably faster than Atom, and has excellent default support for important-to-me features, like click-to-definition. I’ve been using it daily for about half a year now, after being introduced to it by Thomas Adam. Lots of fun.

In terms of full IDEs, the purpose-built LiteIDE has been receiving regular updates and certainly has its share of fans. And the IntelliJ Go plugin has been consistently improving as well.

Repository structure

Update: Ben Johnson has written an excellent article titled Standard Package Layout with great advice for typical line-of-business applications.

Update: Tim Hockin’s go-build-template, adapted slightly, has proven to be a better general model. I’ve adapted this section since its original publication.

We’ve had a lot of time for projects to mature, and some patterns have emerged. While I believe there is no single best repo structure, I think there is a good general model for many types of projects. It’s especially useful for projects that provide both binaries and libraries, or combine Go code with other, non-Go assets.

The basic idea is to have two top-level directories, pkg and cmd. Underneath pkg, create directories for each of your libraries. Underneath cmd, create directories for each of your binaries. All of your Go code should live exclusively in one of these locations.
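
As a sketch, a repository following this model might look like the following; the foo, foosrv, and fs names echo the import-path example a bit further down, and the rest of the entries are only illustrative.

github.com/peterbourgon/foo/
  Dockerfile
  cmd/
    foosrv/
      main.go
  pkg/
    fs/
      fs.go
      fs_test.go
  ui/
    ...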


All of your artifacts remain go gettable. The paths may be slightly longer, but the nomenclature is familiar to other Go developers. And you have space and isolation for non-Go assets. For example, JavaScript can live in a client or ui subdirectory. Dockerfiles, continuous integration configs, or other build helpers can live in the project root or in a build subdirectory. And runtime configuration like Kubernetes manifests can have a home, too.

  Top Tip — Put library code under a pkg/ subdirectory. Put binaries under a cmd/ subdirectory.

Of course, you’ll still use fully-qualified import paths. That is, the main.go in cmd/foosrv should import "github.com/peterbourgon/foo/pkg/fs". And beware of the ramifications of including a vendor dir for downstream users.

  Top Tip — Always use fully-qualified import paths. Never use relative imports.

This little bit of structure makes us play nice in the broader ecosystem, and hopefully continues to ensure our code is easy to consume.

Formatting and style

Things have stayed largely the same here. This is one area that Go has gotten quite right, and I really appreciate the consensus in the community and stability in the language.

The Code Review Comments are great, and should be the minimum set of criteria you enforce during code review. And when there are disputes or inconsistencies in names, Andrew Gerrand’s idiomatic naming conventions are a great set of guidelines.

And in terms of tooling, things have only gotten better. You should configure your editor to invoke gofmt—or, better, goimports—on save. (At this point, I hope that’s not in any way controversial.) The go vet tool produces (almost!) no false positives, so you might consider making it part of your precommit hook. And check out the excellent gometalinter for linting concerns. This can produce false positives, so it’s not a bad idea to encode your own conventions somehow.

Configuration

Configuration is the surface area between the runtime environment and the process. It should be explicit and well-documented. I still use and recommend package flag, but I admit at this point I wish it were less esoteric. I wish it had standard, getopts-style long- and short-form argument syntax, and I wish its usage text were much more compact.

12-factor apps encourage you to use environment vars for configuration, and I think that’s fine, provided each var is also defined as a flag. Explicitness is important: changing the runtime behavior of an application should happen in ways that are discoverable and documented.
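
As a minimal sketch of that combination (the flag name, the LISTEN_ADDR variable, and the default value here are only illustrative):

package main

import (
    "flag"
    "log"
    "os"
)

func main() {
    // The environment can supply the default, but the flag remains the
    // explicit, discoverable, documented way to change behavior.
    def := "127.0.0.1:8080"
    if v := os.Getenv("LISTEN_ADDR"); v != "" {
        def = v
    }
    addr := flag.String("listen.addr", def, "HTTP listen address (also via LISTEN_ADDR)")
    flag.Parse()

    log.Printf("listening on %s", *addr)
    // ...
}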

I said it in 2014 but I think it’s important enough to say again: define and parse your flags in func main. Only func main has the right to decide the flags that will be available to the user. If your library code wants to parameterize its behavior, those parameters should be part of type constructors. Moving configuration to package globals has the illusion of convenience, but it’s a false economy: doing so breaks code modularity, makes it more difficult for developers or future maintainers to understand dependency relationships, and makes writing independent, parallelizable tests much more difficult.

  Top Tip — Only func main has the right to decide which flags are available to the user.

I think there’s a great opportunity for a well-scoped flags package to emerge from the community, combining all of these characteristics. Maybe it already exists; if so, please let me know. I’d certainly use it.

Program design

In the talk, I used configuration as a jumping-off point, to discuss a few other issues of program design. (I didn’t cover this in the 2014 version.) To start, let’s take a look at constructors. If we are properly parameterizing all of our dependencies, our constructors can get quite large.

foo, err := newFoo(
    *fooKey,
    bar,
    100 * time.Millisecond,
    nil,
)
if err != nil {
    log.Fatal(err)
}
defer foo.close()

Sometimes this kind of construction is best expressed with a config object: a struct parameter to a constructor that takes optional parameters to the constructed object. Let’s assume fooKey is a required parameter, and everything else either has a sensible default or is optional. Often, I see projects construct config objects in a sort of piecemeal way.

// Don't do this.
cfg := fooConfig{}
cfg.Bar = bar
cfg.Period = 100 * time.Millisecond
cfg.Output = nil

foo, err := newFoo(*fooKey, cfg)
if err != nil {
    log.Fatal(err)
}
defer foo.close()

But it’s considerably nicer to leverage so-called struct initialization syntax to construct the object all at once, in a single statement.

// This is better.
cfg := fooConfig{
    Bar:    bar,
    Period: 100 * time.Millisecond,
    Output: nil,
}

foo, err := newFoo(*fooKey, cfg)
if err != nil {
    log.Fatal(err)
}
defer foo.close()

No statements go by where the object is in an intermediate, invalid state. And all of the fields are nicely delimited and indented, mirroring the fooConfig definition.

Notice we construct and then immediately use the cfg object. In this case we can save another degree of intermediate state, and another line of code, by inlining the struct declaration into the newFoo constructor directly.

// This is even better.
foo, err := newFoo(*fooKey, fooConfig{
    Bar:    bar,
    Period: 100 * time.Millisecond,
    Output: nil,
})
if err != nil {
    log.Fatal(err)
}
defer foo.close()


  Top Tip — Use struct literal initialization to avoid invalid intermediate state. Inline struct declarations where possible.

Let’s turn to the subject of sensible defaults. Observe that the Output parameter is something that can take a nil value. For the sake of argument, assume it’s an io.Writer. If we don’t do anything special, when we want to use it in our foo object, we’ll have to first perform a nil check.

func (f *foo) process() {
    if f.Output != nil {
        fmt.Fprintf(f.Output, "start\n")
    }
    // ...
}

That’s not great. It’s much safer, and nicer, to be able to use output without having to check it for existence.

func (f *foo) process() {
    fmt.Fprintf(f.Output, "start\n")
    // ...
}

So we should provide a usable default here. With interface types, one good way is to pass something that provides a no-op implementation of the interface. And it turns out that the stdlib ioutil package comes with a no-op io.Writer, called ioutil.Discard.

  Top Tip — Avoid nil checks via default no-op implementations.

We could pass that into the fooConfig object, but that’s still fragile. If the caller forgets to do it at the callsite, we’ll still end up with a nil parameter. So, instead, we can create a sort of safety within the constructor.

func newFoo(..., cfg fooConfig) *foo {
    if cfg.Output == nil {
        cfg.Output = ioutil.Discard
    }
    // ...
}

This is just an application of the Go idiom make the zero value useful. We allow the zero value of the parameter (nil) to yield good default behavior (no-op).

  Top Tip — Make the zero value useful, especially in config objects.

Let’s revisit the constructor. The parameters fooKey, bar, period, output are all dependencies. The foo object depends on each of them in order to start and run successfully. If there’s a single lesson I’ve learned from writing Go code in the wild and observing large Go projects on a daily basis for the past six years, it is this: make dependencies explicit.

  Top Tip — Make dependencies explicit!

An incredible amount of maintenance burden, confusion, bugs, and unpaid technical debt can, I believe, be traced back to ambiguous or implicit dependencies. Consider this method on the type foo.

func (f *foo) process() {
    fmt.Fprintf(f.Output, "start\n")
    result := f.Bar.compute()
    log.Printf("bar: %v", result) // Whoops!
    // ...
}

fmt.Fprintf is self-contained and doesn’t affect or depend on global state; in functional terms, it has something like referential transparency. So it is not a dependency. Obviously, f.Bar is a dependency. And, interestingly, log.Printf acts on a package-global logger object; it’s just obscured behind the free function Printf. So it, too, is a dependency.

What do we do with dependencies? We make them explicit. Because the process method prints to a log as part of its work, either the method or the foo object itself needs to take a logger object as a dependency. For example, log.Printf should become f.Logger.Printf.

func (f *foo) process() {
    fmt.Fprintf(f.Output, "start\n")
    result := f.Bar.compute()
    f.Logger.Printf("bar: %v", result) // Better.
    // ...
}

We’re conditioned to think of certain classes of work, like writing to a log, as incidental. So we’re happy to leverage helpers, like package-global loggers, to reduce the apparent burden. But logging, like instrumentation, is often crucial to the operation of a service. And hiding dependencies in the global scope can and does come back to bite us, whether it’s something as seemingly benign as a logger, or perhaps another, more important, domain-specific component that we haven’t bothered to parameterize. Save yourself the future pain by being strict: make all your dependencies explicit.

  Top Tip — Loggers are dependencies, just like references to other components, database handles, commandline flags, etc.

Of course, we should also be sure to take a sensible default for our logger.

func newFoo(..., cfg fooConfig) *foo {
    // ...
    if cfg.Logger == nil {
        cfg.Logger = log.New(ioutil.Discard, ...)
    }
    // ...
}

Update: for more detail on this and the subject of magic, see the June 2017 blog post on a theory of modern Go.

Logging and instrumentation

To speak about the problem generally for a moment: I’ve had a lot more production experience with logging, which has mostly just increased my respect for the problem. Logging is expensive, more expensive than you think, and can quickly become the bottleneck of your system. I wrote more extensively on the subject in a separate blog post, but to recap:

  • Log only actionable information, which will be read by a human or a machine
  • Avoid fine-grained log levels — info and debug are probably enough
  • Use structured logging — I’m biased, but I recommend go-kit/log
  • Loggers are dependencies!

Where logging is expensive, instrumentation is cheap. You should be instrumenting every significant component of your codebase. If it’s a resource, like a queue, instrument it according to Brendan Gregg’s USE method: utilization, saturation, and error count (rate). If it’s something like an endpoint, instrument it according to Tom Wilkie’s RED method: request count (rate), error count (rate), and duration.

If you have any choice in the matter, Prometheus is probably the instrumentation system you should be using. And, of course, metrics are dependencies, too!
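
To make that concrete, here is a small sketch that models a metric with a hand-rolled interface rather than any particular client library; the names are only illustrative:

// Counter is the minimal metrics surface this component needs; a
// Prometheus counter or a test fake can satisfy it.
type Counter interface {
    Add(delta float64)
}

type worker struct {
    Requests Counter // passed in explicitly, just like a Logger
    // ...
}

func (w *worker) handle() {
    w.Requests.Add(1) // instrument the request count; no package globals
    // ...
}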

Let’s use loggers and metrics to pivot and address global state more directly. Here are some facts about Go:

  • log.Print uses a fixed, global log.Logger
  • http.Get uses a fixed, global http.Client
  • http.Server, by default, uses a fixed, global log.Logger
  • database/sql uses a fixed, global driver registry
  • func init exists only to have side effects on package-global state

These facts are convenient in the small, but awkward in the large. That is, how can we test the log output of components that use the fixed global logger? We must redirect its output, but then how can we test in parallel? Just don’t? That seems unsatisfactory. Or, if we have two independent components both making HTTP requests with different requirements, how do we manage that? With the default global http.Client, it’s quite difficult. Consider this example.

func foo() {
    resp, err := http.Get("http://zombo.com")
    // ...
}

http.Get calls on a global in package http. It has an implicit global dependency. Which we can eliminate pretty easily.

func foo(client *http.Client) {
    resp, err := client.Get("http://zombo.com")
    // ...
}

Just pass an http.Client as a parameter. But that is a concrete type, which means if we want to test this function we also need to provide a concrete http.Client, which likely forces us to do actual HTTP communication. Not great. We can do one better, by passing an interface which can Do (execute) HTTP requests.

type Doer interface {
    Do(*http.Request) (*http.Response, error)
}

func foo(d Doer) {
    req, _ := http.NewRequest("GET", "http://zombo.com", nil)
    resp, err := d.Do(req)
    // ...
}

http.Client satisfies our Doer interface automatically, but now we have the freedom to pass a mock Doer implementation in our test. And that’s great: a unit test for func foo is meant to test only the behavior of foo; it can safely assume that the http.Client is going to work as advertised.
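
For illustration, a rough sketch of such a test might look like this; the fakeDoer type and its canned response are only illustrative:

import (
    "io/ioutil"
    "net/http"
    "strings"
    "testing"
)

// fakeDoer returns a canned response, so the test never touches the network.
type fakeDoer struct {
    resp *http.Response
    err  error
}

func (d fakeDoer) Do(*http.Request) (*http.Response, error) { return d.resp, d.err }

func TestFoo(t *testing.T) {
    d := fakeDoer{resp: &http.Response{
        StatusCode: http.StatusOK,
        Body:       ioutil.NopCloser(strings.NewReader("OK")),
    }}
    foo(d) // exercises only foo's own logic; no real HTTP involved
}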

Speaking of testing…

Testing

In 2014, I reflected on our experience with various testing frameworks and helper libraries, and concluded that we never found a great deal of utility in any of them, recommending the stdlib’s approach of plain package testing with table-based tests. Broadly, I still think this is the best advice. The important thing to remember about testing in Go is that it is just programming. It is not sufficiently different from other programming that it warrants its own metalanguage. And so package testing continues to be well-suited to the task.
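
As a reminder of what that looks like in practice, here is a bare-bones table-driven test; the Add function under test is hypothetical:

import "testing"

func TestAdd(t *testing.T) {
    for _, tc := range []struct {
        name string
        a, b int
        want int
    }{
        {"zeros", 0, 0, 0},
        {"positives", 2, 3, 5},
        {"negatives", -1, -2, -3},
    } {
        if got := Add(tc.a, tc.b); got != tc.want {
            t.Errorf("%s: Add(%d, %d) = %d, want %d", tc.name, tc.a, tc.b, got, tc.want)
        }
    }
}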

TDD/BDD packages bring new, unfamiliar DSLs and control structures, increasing the cognitive burden on you and your future maintainers. I haven’t personally seen a codebase where that cost has paid off in benefits. Like global state, I believe these packages represent a false economy, and more often than not are the product of cargo-culting behaviors from other languages and ecosystems. When in Go, do as Gophers do: we already have a language for writing simple, expressive tests—it’s called Go, and you probably know it pretty well.

With that said, I do recognize my own context and biases. Like with my opinions on the GOPATH, I’ve softened a bit, and defer to those teams and organizations for whom a testing DSL or framework may make sense. If you know you want to use a package, go for it. Just be sure you’re doing it for well-defined reasons.

Another incredibly interesting topic has been designing for testing. Mitchell Hashimoto recently gave a great talk on the subject here in Berlin (SpeakerDeck, YouTube) which I think should be required viewing.

In general, the thing that seems to work the best is to write Go in a generally functional style, where dependencies are explicitly enumerated, and provided as small, tightly-scoped interfaces whenever possible. Beyond being good software engineering discipline in itself, it feels like it automatically optimizes your code for easy testing.

  Top Tip — Use many small interfaces to model dependencies.

As in the http.Client example just above, remember that unit tests should be written to test the thing being tested, and nothing more. If you’re testing a process function, there’s no reason to also test the HTTP transport the request came in on, or the path on disk the results get written to. Provide inputs and outputs as fake implementations of interface parameters, and focus on the business logic of the method or component exclusively.

  Top Tip — Tests only need to test the thing being tested.

Dependency management

Ever the hot topic. In 2014, things were nascent, and about the only concrete advice I could give was to vendor. That advice still holds today: vendoring is still the solution to dependency management for binaries. In particular, the GO15VENDOREXPERIMENT and its concomitant vendor/ subdirectory have become default in Go 1.6. So you’ll be using that layout. And, thankfully, the tools have gotten a lot better. Some I can recommend:

  • FiloSottile/gvt takes a minimal approach, basically just extracting the vendor subcommand from the gb tool so it can be used standalone.
  • Masterminds/glide takes a maximal approach, attempting to recreate the feel and finish of a fully-featured dependency management tool using vendoring under the hood.
  • kardianos/govendor sits in the middle, providing probably the richest interface to vendoring-specific nouns and verbs, and is driving the conversation on the manifest file.
  • constabulary/gb abandons the go tooling altogether in favor of a different repository layout and build mechanism. Great if you produce binaries and can mandate the build environment, e.g. in a corporate setting.

  Top Tip — Use a top tool to vendor dependencies for your binary.

A big caveat for libraries. In Go, dependency management is a concern of the binary author. Libraries with vendored dependencies are very difficult to use; so difficult that it is probably better said that they are impossible to use. There are many corner cases and edge conditions that have played out in the months since vendoring was officially introduced in 1.5. (You can dig in to one of these forum posts if you’re particularly interested in the details.) Without getting too deep in the weeds, the lesson is clear: libraries should never vendor dependencies.

  Top Tip — Libraries should never vendor their dependencies.

You can carve out an exception for yourself if your library has hermetically sealed its dependencies, so that none of them escape to the exported (public) API layer. No dependent types referenced in any exported functions, method signatures, structures—anything.

If you have the common task of maintaining an open-source repository that contains both binaries and libraries, unfortunately, you are stuck between a rock and a hard place. You want to vendor your deps for your binaries, but you shouldn’t vendor them for your libraries, and the GO15VENDOREXPERIMENT doesn’t admit this level of granularity, which appears to me to be a regrettable oversight.

Bluntly, I don’t have an answer to this. The etcd folks have hacked together a solution using symlinks which I cannot in good faith recommend, as symlinks are not well-supported by the go toolchain and break entirely on Windows. That this works at all is more a happy accident than any consequence of design. I and others have raised all of these concerns to the core team, and I hope something will happen in the near term.

Build and deploy

Regarding building, one important lesson learned, with a hat tip to Dave Cheney: prefer go install to go build. The install verb caches build artifacts from dependencies in $GOPATH/pkg, making builds faster. It also puts binaries in $GOPATH/bin, making them easier to find and invoke.

  Top Tip — Prefer go install to go build.

If you produce a binary, don’t be afraid to try out new build tools like gb, which may significantly reduce your cognitive burden. Conversely, remember that since Go 1.5 cross-compilation is built-in; just set the appropriate GOOS and GOARCH environment variables, and invoke the appropriate go command. So there’s no need for extra tools here anymore.

Regarding deployment, we Gophers have it pretty easy compared to languages like Ruby or Python, or even the JVM. One note: if you deploy in containers, follow the advice of Kelsey Hightower and do it FROM scratch. Go gives us this incredible opportunity; it’s a shame not to use it.

As more general advice, think carefully before choosing a platform or orchestration system—if you even choose one at all. Likewise for jumping onto the microservices bandwagon. An elegant monolith, deployed as an AMI to an autoscaling EC2 group, is a very productive setup for small teams. Resist, or at least carefully consider, the hype.

Conclusion

The Top Tips:

  1. Put $GOPATH/bin in your $PATH, so installed binaries are easily accessible.
  2. Put library code under a pkg/ subdirectory. Put binaries under a cmd/ subdirectory.
  3. Always use fully-qualified import paths. Never use relative imports.
  4. Defer to Andrew Gerrand’s naming conventions.
  5. Only func main has the right to decide which flags are available to the user.
  6. Use struct literal initialization to avoid invalid intermediate state.
  7. Avoid nil checks via default no-op implementations.
  8. Make the zero value useful, especially in config objects.
  9. Make dependencies explicit!
  10. Loggers are dependencies, just like references to other components, database handles, commandline flags, etc.
  11. Use many small interfaces to model dependencies.
  12. Tests only need to test the thing being tested.
  13. Use a top tool to vendor dependencies for your binary.
  14. Libraries should never vendor their dependencies.
  15. Prefer go install to go build.

Go has always been a conservative language, and its maturity has brought relatively few surprises and effectively no major changes. Consequently, and predictably, the community also hasn’t dramatically shifted its stances on what’s considered best practice. Instead, we’ve seen a reification of tropes and proverbs that were reasonably well-known in the early years, and a gradual movement “up the stack” as design patterns, libraries, and program structures are explored and transformed into idiomatic Go.

Here’s to another 6 years of fun and productive Go programming. 🏌

Go back to my website, or follow me on Twitter.


Construction workers go agile with scrum


They swear by it. The construction workers of Van Gelderen BV have been working agile for some time now. By applying scrum, the company has become nimble: “We no longer have a nailed-down organizational structure.”

“I used to dread building a house, but since we started working in scrum, not anymore,” says construction worker Ferry. “Now we work in sprints with small self-organizing teams to build, say, a small wall within two weeks. That is an achievable goal, and it allows us to limit the risks. Thanks to this framework, we now always work with a realistic planning. The payoff? Transparency.”

Ferry’s colleague Michael agrees. “Thanks to our company’s agile mindset, the end result stays open, unlike at the construction firms that still work the traditional way. By redefining the goal for the next two weeks every two weeks, a project can end up as a house, a skyscraper, or even, say, a windmill. Nobody knows in advance. Customers in 2018 will really appreciate that, we expect.”

Michael pulls a small booklet from his back pocket: “This contains the 12 principles of agile.” He reads aloud: “Simplicity, the art of maximizing the amount of work not done, is essential.” The construction worker lets a silence fall. “It gives me strength to always have this with me.”

Director Niek van Gelderen is pleased with the culture shift: “Until recently we worked lean, but that made no sense at all. Maximum added value for the customer, my ass. Sure, we eliminated waste, but processes and tools were still valued above people and their interactions. Obviously you can’t build so much as a rabbit hutch that way.”


Rainbow Deploys with Kubernetes


or: how you can deploy services to Kubernetes that require long periods of draining.

If you want to jump directly to the technical solution, check out the project repo. Below is a short story about how we got to this solution.

In an ideal cloud native world, your services will be stateless so deploys and restarts aren’t disruptive. Unfortunately in the real world, sometimes you have stateful services and can’t realistically turn them stateless.

At Olark, the service that powers chat.olark.com is stateful. Each user’s browser establishes a websockets connection to the backend, which in turn establishes an XMPP connection to our XMPP server. If a backend service instance goes away, all the users who have established XMPP connections via that server will be disconnected and will have to reconnect. While that’s not the end of the world, it’s not a great experience. Also, if it happens to everybody at once, it causes a huge load spike. If we deploy to Kubernetes the traditional way, the rolling deploy will restart all backends, which will cause all logged-in users to reconnect. We had to find a better way.

The old way

Before chat.olark.com was running in Kubernetes, we used up, which would fork new workers each time new code was deployed. This is a common idiom for no-downtime deploys in a variety of languages. We could deploy as often as we want, and the old workers could stick around for a couple days to serve the existing XMPP connections. Once the users had (mostly) switched to the new backend, up would clean up the old workers. We couldn’t do the same thing inside of a container without a ton of trickery and hacks. Containers are meant to be immutable once they’re deployed, and hot-loading code is simply not advised.

First try

Our first attempt to solve this problem was effectively to port “the old way” to use Kubernetes primitives. We used service-loadbalancer to stick sessions to backends and we turned up the terminationGracePeriodSeconds to several hours. This appeared to work at first, but it turned out that we lost a lot of connections before the client closed the connection. We decided that we were probably relying on behavior that wasn’t guaranteed anyways, so we scrapped this plan.

Blue/Green Deploys

Our second thought was to build a Blue/Green deployment in Kubernetes. It’s a fairly common strategy outside of Kube, and isn’t that hard to implement. We would have 2 Deployments, let’s call them chat-olark-com-blue and chat-olark-com-green. When you want to deploy, you just roll out the least-recently-deployed and switch the Service to point at that Deployment once it’s healthy. Rolling back is easy: just switch the service back to the other color. There is a downside: with only two colors, we can only deploy about once per day. It takes 24-48h for connections to naturally burn down, and we don’t want to force too many reconnects. This means that every time we use one of the deployments, we need to wait at least a day before we deploy to the other one.

But wait! We’re in Kubernetes, so let’s just make a ton of colors! We have all of ROY-G-BIV to work with here, so let’s go crazy. This strategy is fine in principle, but managing a bunch of static deployment colors is cumbersome. Plus, each deployment currently requires 16 pods, so running enough to allow us to deploy 4x/day means we need 8 colors (4 per day, plus a day delay) and we’ll be running 128 (2G, 1CPU) pods all the time, even if we only deploy once all week. There’s gotta be a better way!

🌈 Rainbow Deploys 🌈

It turns out that we were almost there with the original Rainbow Deploy idea. The key was simple: instead of using fixed colors, we used git hashes. Instead of a Deployment called chat-olark-com-$COLOR we deploy chat-olark-com-$SHA. As a bonus, since the first six characters of a git sha are also a valid hex color, the name still makes sense. You might even find a new favorite color!

Using this technique, a deploy goes like this:

  • Create a new deployment with the pattern chat-olark-com-$NEW_SHA.
  • When the pods are ready, switch the service to point at chat-olark-com-$NEW_SHA.
    • If you need to roll back, point the service back at chat-olark-com-$OLD_SHA.
  • Once connections have burned down, delete the old deployment.
    • Any of the (few) remaining users will reconnect to a newer backend.

I made a demo repo to showcase how this works.

We’ve been deploying chat.olark.com this way since June, 2017 via Gitlab pipelines. This deployment strategy has been far easier to use and far more reliable than our previous deployments. One day we will hopefully be able to avoid connection draining, but this has proved to be a step in the right direction.

Clean up

We still have one unsolved issue with this deployment strategy: how to clean up the old deployments when they’re no longer serving (much) traffic. So far we haven’t found a good way of detecting a lightly used deployment, so we’ve been cleaning them up manually every once in a while. The idea is to wait until the number of connections are low enough that it will be minimally disruptive. It would be nice to automate this, but it’s actually somewhat difficult to detect when the time is right. Hopefully this will be a future post.

The future

I would love to see something like this end up as a native Deployment Strategy. It ought to be possible to make an Immutable deployment method where pods only get created but the old ones aren’t destroyed immediately. It’d be even better if there were some way to define when old pods would be cleaned up. A lifecycle hook or signal may suffice here, to indicate to the pod when it’s no longer receiving production traffic and should shut down when ready.


Announcing Go Support for AWS Lambda


This post courtesy of Paul Maddox, Specialist Solutions Architect (Developer Technologies).

Today, we’re excited to announce Go as a supported language for AWS Lambda.

As someone who’s done their fair share of Go development (recent projects include AWS SAM Local and GoFormation), this is a release I’ve been looking forward to for a while. I’m going to take this opportunity to walk you through how it works by creating a Go serverless application, and deploying it to Lambda.


This post assumes that you already have Go installed and configured on your development machine, as well as a basic understanding of Go development concepts. For more details, see https://golang.org/doc/install.

Creating an example Serverless application with Go

Lambda functions can be triggered by variety of event sources:

  • Asynchronous events (such as an object being put in an Amazon S3 bucket)
  • Streaming events (for example, new data records on an Amazon Kinesis stream)
  • Synchronous events (manual invocation, or HTTPS request via Amazon API Gateway)

As an example, you’re going to create an application that uses an API Gateway event source to create a simple Hello World RESTful API. The full source code for this example application can be found on GitHub at: https://github.com/aws-samples/lambda-go-samples.

After the application is published, it receives a name via the HTTPS request body, and responds with “Hello <name>.” For example:

$ curl -XPOST -d "Paul" "https://my-awesome-api.example.com/"
Hello Paul

To implement this, create a Lambda handler function in Go.

Import the github.com/aws/aws-lambda-go package, which includes helpful Go definitions for Lambda event sources, as well as the lambda.Start() method used to register your handler function.

Start by creating a new project directory in your $GOPATH, and then creating a main.go file that contains your Lambda handler function:

package main

import (
    "errors"
    "log"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

var (
    // ErrNameNotProvided is thrown when a name is not provided
    ErrNameNotProvided = errors.New("no name was provided in the HTTP body")
)

// Handler is your Lambda function handler
// It uses Amazon API Gateway request/responses provided by the aws-lambda-go/events package,
// However you could use other event sources (S3, Kinesis etc), or JSON-decoded primitive types such as 'string'.
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {

    // stdout and stderr are sent to AWS CloudWatch Logs
    log.Printf("Processing Lambda request %s\n", request.RequestContext.RequestID)

    // If no name is provided in the HTTP request body, throw an error
    if len(request.Body) < 1 {
        return events.APIGatewayProxyResponse{}, ErrNameNotProvided
    }

    return events.APIGatewayProxyResponse{
        Body:       "Hello " + request.Body,
        StatusCode: 200,
    }, nil
}

func main() {
    lambda.Start(Handler)
}
The lambda.Start() method takes a handler, and talks to an internal Lambda endpoint to pass Invoke requests to the handler. If a handler does not match one of the supported types, the Lambda package responds to new invocations served by an internal endpoint with an error message such as:

json: cannot unmarshal object into Go value of type int32: UnmarshalTypeError

The lambda.Start() method blocks, and does not return after being called, meaning that it’s suitable to run in your Go application’s main entry point.

More detail on AWS Lambda function handlers with Go

A handler function passed to lambda.Start() must follow these rules:

  • It must be a function.
  • The function may take between 0 and 2 arguments.
    • If there are two arguments, the first argument must implement context.Context.
  • The function may return between 0 and 2 values.
    • If there is one return value, it must implement error.
    • If there are two return values, the second value must implement error.

The github.com/aws/aws-lambda-go library automatically unmarshals the Lambda event JSON to the argument type used by your handler function. To do this, it uses Go’s standard encoding/json package, so your handler function can use any of the standard types supported for unmarshalling (or custom types containing those):

  • bool, for JSON booleans
  • float64, for JSON numbers
  • string, for JSON strings
  • []interface{}, for JSON arrays
  • map[string]interface{}, for JSON objects
  • nil, for JSON null

For example, suppose your Lambda function receives a JSON event payload like the following:

  "id": 12345,
  "value": "some-value"

It should respond with a JSON response that looks like the following:

  "message": "processed request ID 12345",
  "ok": true

You could use a Lambda handler function that looks like the following:

package main

import (
    "fmt"

    "github.com/aws/aws-lambda-go/lambda"
)

type Request struct {
    ID    float64 `json:"id"`
    Value string  `json:"value"`
}

type Response struct {
    Message string `json:"message"`
    Ok      bool   `json:"ok"`
}

func Handler(request Request) (Response, error) {
    return Response{
        Message: fmt.Sprintf("Processed request ID %f", request.ID),
        Ok:      true,
    }, nil
}

func main() {
    lambda.Start(Handler)
}

For convenience, the github.com/aws/aws-lambda-go package provides event sources that you can also use in your handler function arguments. It also provides return values for common sources such as S3, Kinesis, Cognito, and the API Gateway event source and response objects that you’re using in the application example.

Adding unit tests

To test that the Lambda handler works as expected, create a main_test.go file containing some basic unit tests.

package main_test

import (
    "testing"

    main "github.com/aws-samples/lambda-go-samples"
    "github.com/aws/aws-lambda-go/events"
    "github.com/stretchr/testify/assert"
)

func TestHandler(t *testing.T) {
    tests := []struct {
        request events.APIGatewayProxyRequest
        expect  string
        err     error
    }{
        {
            // Test that the handler responds with the correct response
            // when a valid name is provided in the HTTP body
            request: events.APIGatewayProxyRequest{Body: "Paul"},
            expect:  "Hello Paul",
            err:     nil,
        },
        {
            // Test that the handler responds ErrNameNotProvided
            // when no name is provided in the HTTP body
            request: events.APIGatewayProxyRequest{Body: ""},
            expect:  "",
            err:     main.ErrNameNotProvided,
        },
    }

    for _, test := range tests {
        response, err := main.Handler(test.request)
        assert.IsType(t, test.err, err)
        assert.Equal(t, test.expect, response.Body)
    }
}

Run your tests:

$ go test
ok      github.com/awslabs/lambda-go-example    0.041s

Note: To make the unit tests more readable, this example uses a third-party library (https://github.com/stretchr/testify). This allows you to describe the test cases in a more natural format, making them more maintainable for other people who may be working in the code base.

Build and deploy

As Go is a compiled language, build the application and create a Lambda deployment package. To do this, build a binary that runs on Linux, and zip it up into a deployment package.

$ GOOS=linux go build -o main
$ zip deployment.zip main

The binary doesn’t need to be called main, but the name must match the Handler configuration property of the deployed Lambda function.

The deployment package is now ready to be deployed to Lambda. One deployment method is to use the AWS CLI. Provide a valid Lambda execution role for --role.

$ aws lambda create-function \
--region us-west-1 \
--function-name HelloFunction \
--zip-file fileb://./deployment.zip \
--runtime go1.x \
--tracing-config Mode=Active \
--role arn:aws:iam::<account-id>:role/<execution-role-name> \
--handler main

From here, configure the invoking service for your function, in this example API Gateway, to call this function and provide the HTTPS frontend for your API. For more information about how to do this in the API Gateway console, see Create an API with Lambda Proxy Integration. You could also do this in the Lambda console by assigning an API Gateway trigger.

Lambda Console Designer Trigger selection

Then, configure the trigger:

  • API name: lambda-go
  • Deployment stage: prod
  • Security: open

This results in an API Gateway endpoint that you can test.

Lambda Console API Gateway configuration

Now, you can use cURL to test your API:

$ curl -XPOST -d "Paul" https://u7fe6p3v64.execute-api.us-east-1.amazonaws.com/prod/main
Hello Paul

Doing this manually is fine and works for testing and exploration. If you were doing this for real, you’d want to automate this process further. The next section shows how to add a CI/CD pipeline to this process to build, test, and deploy your serverless application as you change your code.

Automating tests and deployments

Next, configure AWS CodePipeline and AWS CodeBuild to build your application automatically and run all of the tests. If it passes, deploy your application to Lambda.

The first thing you need to do is create an AWS Serverless Application Model (AWS SAM) template in your source repository. SAM provides an easy way to deploy Serverless resources, such as Lambda functions, APIs, and other event sources, as well as all of the necessary IAM permissions, etc. You can also include any valid AWS CloudFormation resources within your SAM template, such as a Kinesis stream, or an Amazon DynamoDB table. They are deployed alongside your Serverless application.

Create a file called template.yml in your application repository with the following contents:

AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: main
      Runtime: go1.x
      Tracing: Active
      Events:
        PostEndpoint:   # an arbitrary logical name for the API event
          Type: Api
          Properties:
            Path: /
            Method: post

The above template instructs SAM to deploy a Lambda function (called HelloFunction in this case), with the Go runtime (go1.x), and also an API configured to pass HTTP POST requests to your Lambda function. The Handler property defines which binary in the deployment package needs to be executed (main in this case).

You’re going to use CodeBuild to run your tests, build your Go application, and package it. You can tell CodeBuild how to do all of this by creating a buildspec.yml file in your repository containing the following:

version: 0.2

env:
  variables:
    # This S3 bucket is used to store the packaged Lambda deployment bundle.
    # Make sure to provide a valid S3 bucket name (it must exist already).
    # The CodeBuild IAM role must allow write access to it.
    S3_BUCKET: "your-s3-bucket"
    PACKAGE: "github.com/aws-samples/lambda-go-samples"

phases:

  install:
    commands:
      # AWS Codebuild Go images use /go for the $GOPATH so copy the
      # application source code into that directory structure.
      - mkdir -p "/go/src/$(dirname ${PACKAGE})"
      - ln -s "${CODEBUILD_SRC_DIR}" "/go/src/${PACKAGE}"
      # Print all environment variables (handy for AWS CodeBuild logs)
      - env
      # Install golint
      - go get -u github.com/golang/lint/golint

  pre_build:
    commands:
      # Make sure we're in the project directory within our GOPATH
      - cd "/go/src/${PACKAGE}"
      # Fetch all dependencies
      - go get ./...
      # Ensure that the code passes all lint tests
      - golint -set_exit_status
      # Check for common Go problems with 'go vet'
      - go vet .
      # Run all tests included with the application
      - go test .

  build:
    commands:
      # Build the go application
      - go build -o main
      # Package the application with AWS SAM
      - aws cloudformation package --template-file template.yml --s3-bucket ${S3_BUCKET} --output-template-file packaged.yml

artifacts:
  files:
    - packaged.yml

This buildspec file does the following:

  • Sets up your GOPATH, ready for building
  • Runs golint to make sure that any committed code matches the Go style and formatting specification
  • Runs any unit tests present (via go test)
  • Builds your application binary
  • Packages the binary into a Lambda deployment package and uploads it to S3

For more details about buildspec files, see the Build Specification Reference for AWS CodeBuild.

Your project directory should now contain the following files:

$ tree
├── buildspec.yml    (AWS CodeBuild configuration file)
├── main.go          (Our application)
├── main_test.go     (Unit tests)
└── template.yml     (AWS SAM template)
0 directories, 4 files

You’re now ready to set up your automated pipeline with CodePipeline.

Create a new pipeline

Get started by navigating to the CodePipeline console. You need to give your new pipeline a name, such as HelloService.

Next, select the source repository in which your application code is located. CodePipeline supports either AWS CodeCommit, GitHub.com, or S3. To use the example GitHub.com repository mentioned earlier in this post, fork it into your own GitHub.com account or create a new CodeCommit repository and clone it into there. Do this first before selecting a source location.

CodePipeline Source location configuration

Tell CodePipeline to use CodeBuild to test, build, and package your application using the buildspec.yml file created earlier:

CodePipeline Console Build Configuration

Important: CodeBuild needs read/write access to the S3 bucket referenced in the buildspec.yml file that you wrote. It places the packaged Lambda deployment package into S3 after the tests and build are completed. Make sure that the CodeBuild service role created or provided has the correct IAM permissions. For more information, see Writing IAM Policies: How to grant access to an Amazon S3 bucket. If you don’t do this, CodeBuild fails.

Finally, set up the deployment stage of your pipeline. Select AWS CloudFormation as the deployment method, and the Create or replace a change set mode (as required by SAM). To deploy multiple environments (for example, staging, production), add additional deployment stages to your pipeline after it has been created.

CodePipeline Console Deploy configuration

After being created, your pipeline takes a few minutes to initialize, and then automatically triggers. You can see the latest commit in your version control system make progress through the build and deploy stages of your pipeline.

You do not need to configure anything further to automatically run your pipeline on new version control commits. It already automatically triggers, builds, and deploys each time.

CodePipeline Console Created Pipeline

Make one final change to the pipeline, to configure the deployment stage to execute the CloudFormation changeset that it creates. To make this change, choose the Edit button on your pipeline, choose the pencil icon on the staging deployment stage, and add a new action:

CodePipeline Console Add Action

After the action is added, save your pipeline. You can test it by making a small change to your Lambda function, and then committing it back to version control. You can see your pipeline trigger, and the changes get deployed to your staging environment.

See it in Action

After a successful run of the pipeline has completed, you can navigate to the CloudFormation console to see the deployment details.

In your case, you have a CloudFormation stack deployed. If you look at the Resources tab, you see a table of the AWS resources that have been deployed.

CloudFormation Resources tab

Choose the ServerlessRestApi item link to navigate to the API Gateway console and view the details of your deployed API, including the URL.

API Gateway Stage Editor

You can use cURL to test that your Serverless application is functioning as expected:

$ curl -XPOST -d "Paul" https://y5fjgtq6dj.execute-api.us-west-1.amazonaws.com/Stage
Hello Paul

One more thing!

We are also excited to announce that AWS X-Ray can be enabled in your Lambda runtime to analyze and debug your Go functions written for Lambda. The X-Ray SDK for Go works with the Go context of your Lambda function, providing features such as AWS SDK retry visibility and one-line error capture.
x-ray console waterfall diagram
You can use annotations and metadata to capture additional information in X-Ray about your function invocations. Moreover, the SDK supports the net/http client package, enabling you to trace requests made to endpoints even if they are not X-Ray enabled.

Wrapping it up!

Support for Go has been a much-requested feature in Lambda and we are excited to be able to bring it to you. In this post, you created a basic Go-based API and then went on to create a full continuous integration and delivery pipeline that tests, builds, and deploys your application each time you make a change.

You can also get started with AWS Lambda Go support through AWS CodeStar. AWS CodeStar lets you quickly launch development projects that include a sample application, source control and release automation. With this announcement, AWS CodeStar introduced new project templates for Go running on AWS Lambda. Select one of the CodeStar Go project templates to get started. CodeStar makes it easy to begin editing your Go project code in AWS Cloud9, an online IDE, with just a few clicks.

CodeStar Go application

Excited about Go in Lambda or have questions? Let us know in the comments here, in the AWS Forums for Lambda, or find us on Twitter at @awscloud.


From temptation to sextortion: Inside the fake Facebook profile industry

Wednesday, Sept. 6, 2017, around 3 p.m. (France local time).

I’ve just uncovered the most important element of the entire investigation. It’s a photo of a group of friends on Facebook, really nothing special. However, this photo, and the comments under it, allow me to finally confirm the identity of one of the men behind the network.

Then, like in a movie, a few seconds after having taken a few screengrabs, everything disappears. A dozen of the most popular fake accounts in the network go offline.

It’s a total blackout, as if someone knows I’m getting closer to the truth.

Let’s call him “Mehdi.” His name has been popping up in my notes for months. He’s the moderator of a private Facebook group that has more than 600,000 members, and which is often used by the network’s fake profiles to drive traffic. The other moderators of the group are all fake profiles. Everything points to Mehdi.

Then, one day, I find this picture from September 2016, where he made a serious mistake.

One of Mehdi’s friends publishes a group photo and tags her friends, including Mehdi. I recognize him in the picture. But when I put my mouse cursor on Mehdi’s face, I see that he’s not tagged using his name. His face is tagged to Amandine Ponticaud, one of the biggest fake profiles in the network.

A photo published on Facebook. We see 10 young people, 5 women and 5 men. Their faces are blurred. On the right, we see Pablo and Mehdi, two of the administrators of the network.

In the comments, a guy started making fun of Mehdi. Mehdi answered back. But he did so with Amandine’s profile, not his own.

What follows is a flurry of insults between the guy and Mehdi. Mehdi finds a picture of the guy’s mother on his Facebook profile and says he’s going to use it “in his next porno post.” Remember that bait accounts use fake porno links to trap its victims.

Tired of the abuse, the guy blocks Amandine’s profile. But then Mehdi jumps back into the fray, this time under the name Léa Pierné – another fake account in the network. The guy blocks this account, and Mehdi comes back again with yet another of the network’s fake profiles, Isabelle Bekaert.

It’s clear, then, that Mehdi had, in September 2016 at least, access to these three fake profiles, which are some of the keystones of the network. He even admits to publishing “porno links.”

In fact, in the comment section of a July 2017 post by the network, these three same fake profiles were used to give the illusion that people had watched a supposed porno video.

A conversation that took place in the comments section underneath one of the network's posts. Four of the network's fake profiles write that they downloaded the alleged pornographic video linked to by the post.

The Marseille gang

Where things become interesting is when we search for Mehdi’s name on Google. Because, you see, he seems to have been doing this for quite some time. His name pops up on video game forums in France.

Since 2012, forum users have wanted to get him kicked off Facebook. Why? They said he shares “fake accounts” that publish “pictures stolen from chicks’ accounts.” In July 2012, some users banded together in a systematic campaign to flag Mehdi’s Facebook profile.

In these old forum posts, another man is also named, purported to be Mehdi’s partner. We’ll call him “Pablo.” He does seem to be Facebook friends with other people involved in the network. Mehdi and Pablo seem to come from southern France, around the city of Marseille.

By snooping a bit, I found two ads – one published by Pablo, the other by Mehdi – published on the listings website Webfrance in April 2015. In both ads, Mehdi and Pablo try to sell the same Facebook page, now defunct, which had, at the time, 280,000 subscribers. A person writes in the comments that they were scammed “three times” by Pablo, who had tried to sell him “fake accounts.”

In another ad, Pablo says he wants to “quit social media to concentrate on real life.” He says he’s selling three Facebook pages, with 280,000, 129,000 and 70,000 active subscribers. He uses an email address that includes Mehdi’s name in his ad.

Caught red-handed

Here’s where the story takes an unexpected turn. By searching for Pablo’s name on Facebook, I stumble upon a very strange page. It’s in Pablo’s name and uses his face as a profile picture.

On July 10, 2013, the page simultaneously published 373 pictures in the same public album, accessible to all. These images seem to be screengrabs from computers and mobile phones. In these screengrabs, we can see the inner workings of a sextortion ring of fake accounts.

In this album, we can see pictures of young women, some more explicit than others; anything one would need to, say, create a fake profile to scam men.

We can also see statistics of the engagement created by several Facebook pages supposedly belonging to pretty young girls.

What’s more, we see a screengrab of a Facebook chat window, where Mehdi asks a friend to make him administrator of a page. “I’m gonna scam a dude and I just told him that I was admin,” he writes. Mehdi gloats a few minutes later that the scam worked.

There’s also a screengrab of a PayPal transfer worth 500 euros ($740 CDN).

Then come a series of four incriminating screengrabs where we see – beyond doubt – a person carrying out a sextortion scam.

It’s the classic setup: make a man believe that he’s talking to a woman so that he gets naked in front of the camera, then take screengrabs of the exchange to blackmail him.

In the image, we see a Skype video conversation. The owner of the computer is chatting with a man. This man is naked and masturbating. In the small window which usually shows a Skype user what his chat partner is seeing, we see a nude woman masturbating on a bed.

However, behind the Skype window, we can also see that this user is using a computer program to display pornographic videos on Skype, to give his victim the illusion that he is interacting with a woman. In the background, we can see that this user has at least two videos of the same nude woman, which he can display in his Skype window.

I can’t be absolutely certain where these screengrabs come from. It would be very unlikely that someone could manage to fake 373 images to try and make Pablo look bad. Were these screengrabs obtained through a hack? Were they uploaded by mistake by someone working for the network? It’s impossible to know.

Still, it would be a curiously improbable coincidence that screengrabs showing the inner workings of a sextortion ring would be published to a Facebook page bearing Pablo’s name, when he seems to be at the center of a network which does exactly that type of activity.

Pablo and Mehdi both ignored multiple attempts to contact them. However, my colleague Marie-Eve talked to two (real) young women who had participated in the network’s activities by sharing posts from fake profiles. Both confirmed that the network is used to make money. One of them said that she made 10,000 euros ($14,800 CDN) in a single month by “sharing links on Facebook.” She also claimed that the network was based in France, Spain and Italy. Both women abruptly ended all communication with us after initially agreeing to an interview.

Shortly after this, the fake profiles started disappearing. It’s probably no coincidence that the profiles to which Mehdi had access in 2016 disappeared as well.

To me, it’s clear that Pablo and Mehdi are not running this network by themselves. What we’re seeing is most likely several different interconnected networks that co-operate to attract a mutually beneficial audience. Another part of the network, based in northern France and Belgium, seems to run a slightly different scheme, using fake profiles to attract men towards Snapchat accounts. These accounts seem to be running a cyberprostitution ring. But that’s a story for another day.

As for the network run by Pablo and Mehdi, its disappearance – which is probably only temporary – allowed me to better understand its scope. The profiles seem to have been deactivated rather than deleted outright. What's more, Snapchat accounts related to some of the fake Facebook profiles run by the network have continued sharing fake pornography links, using the same tactic as on Facebook.

A young woman in underwear lying on her stomach on a bed. We do not see her face.
“I was alone at home, I made a hot video... Who wants to see it? Slide the screen up.”

- Émilie Hébert

After analyzing the HTML code of the webpages these links lead to, I was able to determine that the network uses a CPA (Cost Per Action) marketing service. By embedding a script in a webpage, the network automatically redirects its victims towards fraudulent dating sites, where they're asked to enter their personal details, including their credit card numbers.

From what I've been able to see on the CPA company's webpage, the network can make up to 28 euros every time someone it sends to a dating site signs up. Given that some of these links can generate thousands of likes and comments on Facebook, and that their potential audience can reach tens or even hundreds of thousands of people, the money to be made this way is substantial. If his Snapchat and Instagram accounts are to be believed, Mehdi seems to be living the life of a globetrotter these days – an expensive hobby.

Are Mehdi and Pablo behind everything that goes on in the network, from A to Z? It's impossible to tell. Perhaps the network “rents” its audience to fraudsters in exchange for a cut of the profits. Or maybe fraudsters have simply figured out that the network's posts are perfect hunting grounds. What we do know is that the entire process is in place, and it seems to be working well.

And what about Béatrice in all of this?

She seems well, but I never managed to find out who’s behind her profile. She recently stopped sharing sexy pictures.

She started her old scheme again, sharing pictures of sick or handicapped people.


An anti-aging strategy that works in mice is about to be tested in humans


Jan van Deursen was baffled by the decrepit-looking transgenic mice he created in 2000. Instead of developing tumours as expected, the mice experienced a stranger malady. By the time they were three months old, their fur had grown thin and their eyes were glazed with cataracts. It took him years to work out why: the mice were ageing rapidly, their bodies clogged with a strange type of cell that did not divide, but that wouldn't die.

That gave van Deursen and his colleagues at Mayo Clinic in Rochester, Minnesota, an idea: could killing off these 'zombie' cells in the mice delay their premature descent into old age? The answer was yes. In a 2011 study, the team found that eliminating these 'senescent' cells forestalled many of the ravages of age. The discovery set off a spate of similar findings. In the seven years since, dozens of experiments have confirmed that senescent cells accumulate in ageing organs, and that eliminating them can alleviate, or even prevent, certain illnesses (see 'Becoming undead'). This year alone, clearing the cells in mice has been shown to restore fitness, fur density and kidney function. It has also improved lung disease and even mended damaged cartilage. And in a 2016 study, it seemed to extend the lifespan of normally ageing mice.

“Just by removing senescent cells, you could stimulate new tissue production,” says Jennifer Elisseeff, senior author of the cartilage paper and a biomedical engineer at Johns Hopkins University in Baltimore, Maryland. It jump-starts some of the tissue's natural repair mechanisms, she says.

This anti-ageing phenomenon has been an unexpected twist in the study of senescent cells, a common, non-dividing cell type first described more than five decades ago. When a cell enters senescence—and almost all cells have the potential to do so—it stops producing copies of itself, begins to belch out hundreds of proteins, and cranks up anti-death pathways full blast. A senescent cell is in its twilight: not quite dead, but not dividing as it did at its peak.

Now biotechnology and pharmaceutical companies are keen to test drugs—known as senolytics—that kill senescent cells in the hope of rolling back, or at least forestalling, the ravages of age. Unity Biotechnology in San Francisco, California, co-founded by van Deursen, plans to conduct multiple clinical trials over the next two-and-a-half years, treating people with osteoarthritis, eye diseases and pulmonary diseases. At Mayo, gerontologist James Kirkland, who took part in the 2011 study, is cautiously beginning a handful of small, proof-of-concept trials that pit senolytic drugs against a range of age-related ailments. “I lose sleep at night because these things always look good in mice or rats, but when you get to people you hit a brick wall,” says Kirkland.

No other anti-ageing elixir has yet cleared that wall, and for a few good reasons. It's next to impossible to get funding for clinical trials that measure an increase in healthy lifespan. And even as a concept, ageing is slippery. The US Food and Drug Administration has not labelled it a condition in need of treatment.

Still, if any of the trials offer “a whiff of human efficacy”, says Unity's president, Ned David, there will be a massive push to develop treatments and to better understand the fundamental process of ageing. Other researchers who study the process are watching closely. Senolytics are “absolutely ready” for clinical trials, says Nir Barzilai, director of the Institute for Aging Research at the Albert Einstein College of Medicine in New York City. “I think senolytics are drugs that could come soon and be effective in the elderly now, even in the next few years.”

Credit: Nature, October 24, 2017, doi:10.1038/550448a

The dark side

When microbiologists Leonard Hayflick and Paul Moorhead coined the term senescence in 1961, they suggested that it represented ageing on a cellular level. But very little research was done on ageing at the time, and Hayflick recalls people calling him an idiot for making the observation. The idea was ignored for decades.

Although many cells do die on their own, all somatic cells (those other than reproductive ones) that divide have the ability to undergo senescence. But, for a long time, these twilight cells were simply a curiosity, says Manuel Serrano of the Institute for Research in Biomedicine in Barcelona, Spain, who has studied senescence for more than 25 years. “We were not sure if they were doing something important.” Despite self-disabling the ability to replicate, senescent cells stay metabolically active, often continuing to perform basic cellular functions.

By the mid-2000s, senescence was chiefly understood as a way of arresting the growth of damaged cells to suppress tumours. Today, researchers continue to study how senescence arises in development and disease. They know that when a cell becomes mutated or injured, it often stops dividing—to avoid passing that damage to daughter cells. Senescent cells have also been identified in the placenta and embryo, where they seem to guide the formation of temporary structures before being cleared out by other cells.

But it wasn't long before researchers discovered what molecular biologist Judith Campisi calls the “dark side” of senescence. In 2008, three research groups, including Campisi's at the Buck Institute for Research on Aging in Novato, California, revealed that senescent cells excrete a glut of molecules—including cytokines, growth factors and proteases—that affect the function of nearby cells and incite local inflammation. Campisi's group described this activity as the cell's senescence-associated secretory phenotype, or SASP. In recent unpublished work, her team identified hundreds of proteins involved in SASPs.

In young, healthy tissue, says Serrano, these secretions are probably part of a restorative process, by which damaged cells stimulate repair in nearby tissues and emit a distress signal prompting the immune system to eliminate them. Yet at some point, senescent cells begin to accumulate—a process linked to problems such as osteoarthritis, a chronic inflammation of the joints, and atherosclerosis, a hardening of the arteries. No one is quite sure when or why that happens. It has been suggested that, over time, the immune system stops responding to the cells.

Surprisingly, senescent cells turn out to be slightly different in each tissue. They secrete different cytokines, express different extracellular proteins and use different tactics to avoid death. That incredible variety has made it a challenge for labs to detect and visualize senescent cells. “There is nothing definitive about a senescent cell. Nothing. Period,” says Campisi.

In fact, even the defining feature of a senescent cell—that it does not divide—is not written in stone. After chemotherapy, for example, cells take up to two weeks to become senescent, before reverting at some later point to a proliferating, cancerous state, says Hayley McDaid, a pharmacologist at Albert Einstein College of Medicine. In support of that idea, a large collaboration of researchers found this year that removing senescent cells right after chemotherapy, in mouse models for skin and breast cancer, makes the cancer less likely to spread.

The lack of universal features makes it hard to take inventory of senescent cells. Researchers have to use a large panel of markers to search for them in tissue, making the work laborious and expensive, says van Deursen. A universal marker for senescence would make the job much easier—but researchers know of no specific protein to label, or process to identify. “My money would be on us never finding a senescent-specific marker,” Campisi adds. “I would bet a good bottle of wine on that.”

Earlier this year, however, one group did develop a way to count these cells in tissue. Valery Krizhanovsky and his colleagues at the Weizmann Institute of Science in Rehovot, Israel, stained tissues for molecular markers of senescence and imaged them to analyse the number of senescent cells in tumours and aged tissues from mice. “There were quite a few more cells than I actually thought that we would find,” says Krizhanovsky. In young mice, no more than 1% of cells in any given organ were senescent. In two-year-old mice, however, up to 20% of cells were senescent in some organs.

But there's a silver lining to these elusive twilight cells: they might be hard to find, but they're easy to kill.

Out with the old

In November 2011, while on a three-hour flight, David read van Deursen and Kirkland's just-published paper about eliminating zombie cells. Then he read it again, and then a third time. The idea “was so simple and beautiful”, recalls David. “It was almost poetic.” When the flight landed, David, a serial biotech entrepreneur, immediately rang van Deursen, and within 72 hours had convinced him to meet to discuss forming an anti-ageing company.

Kirkland, together with collaborators at the Sanford Burnham Medical Research Institute in La Jolla, California, initially attempted a high-throughput screen to quickly identify a compound that would kill senescent cells. But they found it to be “a monumental task” to tell whether a drug was affecting dividing or non-dividing cells, Kirkland recalls. After several failed attempts, he took another tack.

Senescent cells depend on protective mechanisms to survive in their 'undead' state, so Kirkland, in collaboration with Laura Niedernhofer and others from the Scripps Research Institute in Jupiter, Florida, began seeking out those mechanisms. They identified six signalling pathways that prevent cell death, which senescent cells activate to survive.

Then it was just a matter of finding compounds that would disrupt those pathways. In early 2015, the team identified the first senolytics: an FDA-approved chemotherapy drug, dasatinib, which eliminates human fat-cell progenitors that have turned senescent; and a plant-derived health-food supplement, quercetin, which targets senescent human endothelial cells, among other cell types. The combination of the two—which work better together than apart—alleviates a range of age-related disorders in mice.

Ten months later, Daohong Zhou at the University of Arkansas for Medical Sciences in Little Rock and his colleagues identified a senolytic compound now known as navitoclax, which inhibits two proteins in the BCL-2 family that usually help the cells to survive. Similar findings were reported within weeks by Kirkland's lab and Krizhanovsky's lab.

By now, 14 senolytics have been described in the literature, including small molecules, antibodies and, in March this year, a peptide that activates a cell-death pathway and can restore lustrous hair and physical fitness to ageing mice.

So far, each senolytic kills a particular flavour of senescent cell. Targeting the different diseases of ageing, therefore, will require multiple types of senolytics. “That's what's going to make this difficult: each senescent cell might have a different way to protect itself, so we'll have to find combinations of drugs to wipe them all out,” says Niedernhofer. Unity maintains a large atlas documenting which senescent cells are associated with which disease; any weaknesses unique to given kinds of cell, and how to exploit those flaws; and the chemistry required to build the right drug for a particular tissue. There is no doubt that for different indications, different types of drug will need to be developed, says David. “In a perfect world, you wouldn't have to. But sadly, biology did not get that memo.”

For all the challenges, senolytic drugs have several attractive qualities. Senescent cells will probably need to be cleared only periodically—say, once a year—to prevent or delay disease. So the drug is around for only a short time. This type of 'hit and run' delivery could reduce the chance of side effects, and people could take the drugs during periods of good health. Unity plans to inject the compounds directly into diseased tissue, such as a knee joint in the case of osteoarthritis, or the back of the eye for someone with age-related macular degeneration.

And unlike cancer, in which a single remaining cell can spark a new tumour, there's no need to kill every senescent cell in a tissue: mouse studies suggest that dispatching most of them is enough to make a difference. Finally, senolytic drugs will clear only senescent cells that are already present—they won't prevent the formation of such cells in the future, which means that senescence can continue to perform its original tumour-suppressing role in the body.

Those perks haven't convinced everybody of the power of senolytics. Almost 60 years after his initial discovery, Hayflick now believes that ageing is an inexorable biophysical process that cannot be altered by eliminating senescent cells. “Efforts to interfere with the ageing process have been going on since recorded human history,” says Hayflick. “And we know of nothing—nothing—that has been demonstrated to interfere with the ageing process.”

Fans of senolytics are much more optimistic, emboldened by recent results. Last year, van Deursen's lab went beyond its tests on super-aged mice and showed that killing off senescent cells in normally ageing mice delayed the deterioration of organs associated with ageing, including the kidney and heart. And—to the joy of anti-ageing enthusiasts everywhere—it extended the animals' median lifespan by about 25%.

Successful results from mouse studies have already lured seven or eight companies into the field, Kirkland estimates. At Mayo, one clinical trial has opened, pitting dasatinib and quercetin in combination against chronic kidney disease. Kirkland plans to try other senolytics against different age-related diseases. “We want to use more than one set of agents across the trials and look at more than one condition,” he says.

If eliminating senescent cells in humans does improve age-related illnesses, researchers will aim to create broader anti-ageing therapies, says David. In the meantime, researchers in the field insist that no one should take these drugs until proper safety tests in humans are complete. In rodents, senolytic compounds have been shown to delay wound healing, and there could be additional side effects. “It's just too dangerous,” says Kirkland.

Van Deursen says that continuing to answer basic biological questions is the field's best shot at success. “Only then will we be able to understand what ageing really is, and how we can, in an intelligent way, interfere with it.”

This article is reproduced with permission and was first published on October 24, 2017.
