Gluon – Deep Learning API from AWS and Microsoft


Post by Dr. Matt Wood

Today, AWS and Microsoft announced Gluon, a new open source deep learning interface which allows developers to more easily and quickly build machine learning models, without compromising performance.


Gluon provides a clear, concise API for defining machine learning models using a collection of pre-built, optimized neural network components. Developers who are new to machine learning will find this interface reminiscent of traditional code, since machine learning models can be defined and manipulated just like any other data structure. More seasoned data scientists and researchers will value the ability to build prototypes quickly and utilize dynamic neural network graphs for entirely new model architectures, all without sacrificing training speed.

Gluon is available today in Apache MXNet, is coming to the Microsoft Cognitive Toolkit in a forthcoming release, and will be added to more frameworks over time.

Neural Networks vs Developers
Machine learning with neural networks (including ‘deep learning’) has three main components: data for training, a neural network model, and an algorithm which trains the neural network. You can think of the neural network as something similar to a directed graph: it has a series of inputs (which represent the data), which connect to a series of outputs (the prediction) through a series of connected layers and weights. During training, the algorithm adjusts the weights in the network based on the error in the network’s output. This is the process by which the network learns; it is a memory- and compute-intensive process which can take days.
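The weight-adjustment loop described above can be sketched in a few lines of plain Python. This is a toy, single-weight example for illustration, not any particular framework's API:

```python
# Toy illustration of the training loop described above (not any specific
# framework): a single-weight "network" fit by gradient descent.

def train(samples, lr=0.1, epochs=100):
    """Fit y = w * x by repeatedly nudging w against the error gradient."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x          # forward pass: the network's output
            error = pred - y      # how wrong the output is
            grad = 2 * error * x  # gradient of squared error w.r.t. w
            w -= lr * grad        # adjust the weight to reduce the error
    return w

# Learn the mapping y = 3x from a few examples.
w = train([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)])
```

A real network repeats exactly this pattern across millions of weights and many layers, which is why training is so memory- and compute-hungry.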

Deep learning frameworks such as Caffe2, Cognitive Toolkit, TensorFlow, and Apache MXNet are, in part, an answer to the question ‘how can we speed this process up?’ Just like query optimizers in databases, the more a training engine knows about the network and the algorithm, the more optimizations it can make to the training process (for example, it can infer what needs to be re-computed on the graph based on what else has changed, and skip the unaffected weights to speed things up). These frameworks also provide parallelization to distribute the computation process, and reduce the overall training time.

However, in order to achieve these optimizations, most frameworks require the developer to do some extra work: specifically, by providing a formal definition of the network graph, up-front, and then ‘freezing’ the graph, and just adjusting the weights.

The network definition, which can be large and complex with millions of connections, usually has to be constructed by hand. Not only are deep learning networks unwieldy, but they can be difficult to debug and it’s hard to re-use the code between projects.

This complexity can be difficult for beginners, and is a time-consuming task for more experienced researchers. At AWS, we’ve been experimenting with some ideas in MXNet around new, flexible, more approachable ways to define and train neural networks. Microsoft is also a contributor to the open source MXNet project, and was interested in some of these same ideas. Based on this, we got talking, and found we had a similar vision: to use these techniques to reduce the complexity of machine learning, making it accessible to more developers.

Enter Gluon: dynamic graphs, rapid iteration, scalable training
Gluon introduces four key innovations.

  1. Friendly API: Gluon networks can be defined using simple, clear, concise code – this is easier for developers to learn, and much easier to understand than some of the more arcane and formal ways of defining networks and their associated weighted scoring functions.
  2. Dynamic networks: the network definition in Gluon is dynamic: it can bend and flex just like any other data structure. This is in contrast to the more common, formal, symbolic definition of a network which the deep learning framework has to effectively carve into stone in order to be able to optimize computation during training. Dynamic networks are easier to manage, and with Gluon, developers can easily ‘hybridize’ between these fast symbolic representations and the more friendly, dynamic ‘imperative’ definitions of the network and algorithms.
  3. The algorithm can define the network: the model and the training algorithm are brought much closer together. Instead of separate definitions, the algorithm can adjust the network dynamically during definition and training. Not only does this mean that developers can use standard programming loops, and conditionals to create these networks, but researchers can now define even more sophisticated algorithms and models which were not possible before. They are all easier to create, change, and debug.
  4. High-performance operators for training: these make it possible to have a friendly, concise API and dynamic graphs without sacrificing training speed. This is a huge step forward in machine learning. Some frameworks bring a friendly API or dynamic graphs to deep learning, but these previous methods all incur a cost in terms of training speed. As with other areas of software, abstraction can slow down computation, since it needs to be negotiated and interpreted at run time. Gluon can efficiently blend the concise API together with the formal definition under the hood, without the developer having to know about the specific details or accommodate the compiler optimizations manually.
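To make the "dynamic network" idea in points 2 and 3 concrete — ordinary control flow shaping the computation per input — here is a plain-Python sketch. This is not the actual Gluon API, just the define-by-run pattern it enables:

```python
# Sketch of the "define-by-run" idea behind dynamic graphs (plain Python,
# not the actual Gluon API): the network's structure is just the code path
# taken while computing, so loops and conditionals reshape it per input.

def dynamic_forward(x, weights):
    """Apply a chain of scalar 'layers'; the depth depends on the data."""
    h = x
    for w in weights:
        h = max(0.0, w * h)     # a tiny 'dense + ReLU' layer
        if h > 100.0:           # an ordinary conditional alters the graph
            break               # large activations short-circuit the chain
    return h

out = dynamic_forward(2.0, [3.0, 4.0, 5.0])  # 2*3=6, 6*4=24, 24*5=120
```

A symbolic framework would need the full graph fixed up front; here the break and the loop are part of the model itself, which is what Gluon's imperative mode allows while still hybridizing to a fast symbolic form.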

The team here at AWS, and our collaborators at Microsoft, couldn’t be more excited to bring these improvements to developers through Gluon. We’re already seeing quite a bit of excitement from developers and researchers alike.

Getting started with Gluon
Gluon is available today in Apache MXNet, with support coming for the Microsoft Cognitive Toolkit in a future release. We’re also publishing the front-end interface and the low-level API specifications so it can be included in other frameworks in the fullness of time.

You can get started with Gluon today. Fire up the AWS Deep Learning AMI with a single click and jump into one of 50 fully worked, notebook examples. If you’re a contributor to a machine learning framework, check out the interface specs on GitHub.

-Dr. Matt Wood



Changes in Password Best Practices


NIST recently published its four-volume SP800-63-3 Digital Identity Guidelines. Among other things, it makes three important suggestions when it comes to passwords:

  1. Stop it with the annoying password complexity rules. They make passwords harder to remember. They increase errors because artificially complex passwords are harder to type in. And they don't help that much. It's better to allow people to use pass phrases.

  2. Stop it with password expiration. That was an old idea for an old way we used computers. Today, don't make people change their passwords unless there's indication of compromise.

  3. Let people use password managers. This is how we deal with all the passwords we need.

These password rules were failed attempts to fix the user. Better we fix the security systems.
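For the passphrase suggestion, a minimal generator is only a few lines of Python. The short word list here is a stand-in; real lists such as diceware have thousands of entries:

```python
import secrets

# Sketch of the passphrase approach: a few random common words are easier
# to remember and type than artificially complex strings. This word list
# is a stand-in; real lists (e.g. diceware) have thousands of entries.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "canyon", "lantern", "pickle", "summit", "walrus", "ember"]

def passphrase(n_words=4, wordlist=WORDS):
    # secrets (rather than random) provides cryptographically strong choices
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())
```

With a 7776-word diceware list, four words give about 51.7 bits of entropy while staying typeable and memorable, which is exactly the trade-off NIST is endorsing.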

2 public comments
A meeting recently:
Developer Team: Our passwords require special characters, and max out at 30 characters.
Me: Why on EARTH did you do any of that? Why do you have a max?
Devs: Because ... it's hard to remember something long? How long do you want it to be?
Me: ... Get rid of the max. Get rid of the special characters.
CIO: Wait. Why do we have passwords at all? Can we link to google/linkedin/facebook and make it their problem? We are not in the security business.
Devs: Yes!
I’ve been happy watching such sensible guidelines make it through the review process

Yahoo Triples Estimate of Breached Accounts to 3B


A massive data breach at Yahoo in 2013 was far more extensive than previously disclosed, affecting all of its 3 billion user accounts, new parent company Verizon Communications Inc. said on Tuesday.

The figure, which Verizon said was based on new information, is three times the 1 billion accounts Yahoo said were affected when it first disclosed the breach in December 2016. The new disclosure, four months after Verizon completed its acquisition of Yahoo, shows that executives are still coming to grips with the extent of the...



HTTP is obsolete. It's time for the Distributed Web (2015)



Early this year, the Internet Archive put out a call for a distributed web. We heard them loud and clear.

Today I’m making an announcement that begins our long journey to the future of the web. A web that is faster, more secure, more robust, and more permanent.

Neocities has collaborated with Protocol Labs to become the first major site to implement IPFS in production. Starting today, all Neocities web sites are available for viewing, archiving, and hosting by any IPFS node in the world. When another IPFS node chooses to host a site from Neocities, that version of the site will continue to be available, even if Neocities shuts down or stops hosting it. The more IPFS nodes seed Neocities sites, the more available (and redundant) Neocities sites become. And the less centrally dependent the sites are on us to continue existing.

What is IPFS? From their README:

IPFS is a distributed file system that seeks to connect all computing devices with the same system of files. In some ways, this is similar to the original aims of the Web, but IPFS is actually more similar to a single bittorrent swarm exchanging git objects. IPFS could become a new major subsystem of the internet. If built right, it could complement or replace HTTP. It could complement or replace even more. It sounds crazy. It is crazy.

IPFS is still in the alpha stages of development, so we’re calling this an experiment for now. It hasn’t replaced our existing site storage (yet). Like with any complex new technology, there are a lot of improvements to make. But IPFS isn’t vaporware: it works right now. You can try it out on your own computer, and you can already use it to help us serve and persist Neocities sites.

The message I want to send couldn’t possibly be more audacious: I strongly believe IPFS is the replacement for HTTP (and many other things), and now’s the time to start trying it out. Replacing HTTP sounds crazy. It is crazy! But HTTP is broken, and the craziest thing we could possibly do is continue to use it forever. We need to apply state-of-the-art computer science to the distribution problem, and design a better protocol for the web.

Part 1: What’s wrong with HTTP?

The Hypertext Transfer Protocol (HTTP) has unified the entire world into a single global information protocol, standardizing how we distribute and present information to each other.

It is inconceivable for me to even think about what life would be like without it. HTTP dropped the cost of publishing content to almost nothing, an innovation that took a sledgehammer to the top-down economic, political, and cultural control over distribution of information (music, ideas, video, news, games, everything). As a result of liquifying information and making the publication of it more egalitarian and accessible, HTTP has made almost everything about our culture better.

I love HTTP, and I always will. It truly stands among the greatest and most important inventions of all time.

But while HTTP has achieved many things, its usefulness as a foundation for the distribution and persistence of the sum of human knowledge isn’t just showing some cracks, it’s crumbling to pieces right in front of us. The way HTTP distributes content is fundamentally flawed, and no amount of performance tuneups or forcing broken CA SSL or whatever is going to fix that. HTTP/2 is a welcome improvement, but it’s a conservative update to a technology that’s beginning to show its age. To have a better future for the web, we need more than a spiced up version of HTTP, we need a new foundation. And per the governance model of cyberspace, that means we need a new protocol. IPFS, I’m strongly hoping, becomes that new protocol.

HTTP is brittle


This is a picture of the first HTTP web server in the world. It was Tim Berners-Lee’s NeXT computer at CERN.

Pasted on the machine is an ominous sticker: “This machine is a server, do not power it down!!”.

The reason it couldn’t be powered down is that web sites on other servers were starting to link to it. Once they linked to it, they then depended on that machine continuing to exist. If the machine was powered down, the links stopped working. If the machine failed or was no longer accessible at the same location, a far worse thing happened: the chain between sites became permanently broken, and the ability to access that content was lost forever. That sticker perfectly highlights the biggest problem with HTTP: it erodes.

Tim’s NeXT cube is now a museum piece. The first of millions of future dead web servers.

You’ve seen the result:


Even if you’ve never read the HTTP spec, you probably know what 404 means. It’s the error code used by HTTP to indicate that the site is no longer on the server at that location. Usually you’re not even that lucky. More often, there isn’t even a server there anymore to tell you that the content you’re looking for is gone, and it has no way to help you find it. And unless the Internet Archive backed it up, you’ll never find it again. It becomes lost, forever.

The older a web page is, the more likely it is you’ll see 404 pages. They’re the cold-hearted digital tombstones of a dying web, betraying nothing about what knowledge, beauty, or irreverent stupidity may have once resided there.


One of my favorite sites from the 90s web was Mosh to Yanni, and viewing the site today gives a very strong example of how inadequate HTTP is for maintaining links between sites. All the static content stored with the site still loads, and my modern browser still renders the page (HTML, unlike HTTP, has excellent lasting power). But any links offsite or to dynamically served content are dead. For every weird example like this, there are countless examples of incredibly useful content that have also long since vanished. Whether eroding content is questionable crap or timelessly useful, it’s still our history, and we’re losing it fast.

The reason this happens is simple: centrally managed web servers inevitably shut down. The domain changes ownership, or the company that ran it goes out of business. Or the computer crashes, without having a backup to restore the content with. Having everyone run their own personal HTTP server doesn’t solve this. If anything, it probably makes it worse.

HTTP encourages hypercentralization

The result of this erosion of data has been further dependence on larger, more organized centralized services. Their short-term availability tends to be (mostly) good due to redundant backups. But this still doesn’t address long-term availability, and creates a whole new set of problems.

We’ve come a long way since John Perry Barlow’s A Declaration of the Independence of Cyberspace. As our electronic country becomes more influential and facilitates the world with more information, governments and corporations alike have started to exploit HTTP’s flaws, using them to spy on us, monetize us, and block our access to any content that represents a threat to them, legitimate or otherwise.


The web we were intended to have was decentralized, but the web we have today is very quickly becoming centralized, as billions of users become dependent on a small handful of services.

Regardless of whether you think this is a legitimate tradeoff, this was not how HTTP was intended to be used. Organizations like the NSA (and our future robot overlords) now only have to intercept our communications at a few sources to spy on us. It makes it easy for governments to censor content at their borders by blocking the ability for sites to access these highly centralized resources. It also puts our communications at risk of being interrupted by DDoS attacks.

Distributing the web would make it less malleable by a small handful of powerful organizations, and that improves both our freedom and our independence. It also reduces the risk of the “one giant shutdown” that takes a massive amount of data with it.

HTTP is inefficient

As of this writing, Gangnam Style now has over 2,344,327,696 views. Go ahead, watch it again. I’ll wait for you.

Let’s make some assumptions. The video clocks in at 117 Megabytes. That means (at most) 274,286,340,432 Megabytes, or 274.3 Petabytes of data for the video file alone has been sent since this was published. If we assume a total expense of 1 cent per gigabyte (this would include bandwidth and all of the server costs), $2,742,860 has been spent on distributing this one file so far.
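These back-of-the-envelope numbers are easy to reproduce:

```python
# Reproducing the back-of-the-envelope math above.
views = 2_344_327_696
video_mb = 117                         # size of the video file, in MB

total_mb = views * video_mb            # 274,286,340,432 MB
total_pb = total_mb / 1_000_000_000    # MB -> PB (decimal units): ~274.3 PB
cost_usd = (total_mb / 1_000) * 0.01   # at 1 cent per GB: ~$2.7 million
```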

That’s not too bad… if you’re Google. But if you’re a smaller site, the cost to serve this much data would be astronomical, especially when bandwidth rates for small players start around $0.12 per gigabyte and go as high as $0.20 in Asia. I’ve spent the better part of my work at Neocities battling expensive bandwidth to ensure we can keep running our infrastructure at low cost.

HTTP lowered the price of publishing, but it still costs money, and these costs can really add up. Distributing this much data from central datacenters is potentially very expensive if not done at economies of scale.

What if, instead of always serving this content from datacenters, we could turn every computer on an ISP’s network into a streaming CDN? With a video as popular as Gangnam Style, it could even be completely downloaded from within an ISP’s network, not requiring numerous hops over the internet backbone. This is one of the many things IPFS is capable of improving (we’ll discuss this in a bit).

HTTP creates overdependence on the Internet backbone

When content is hypercentralized, we become highly dependent on the internet backbones connecting us to the datacenters always functioning. Aside from making it easy for governments to block and censor content, there are also reliability problems. Even with redundancies, major backbones sometimes get damaged, or routing tables go haywire, and the consequences can be drastic.

I got a weird taste of that a few months ago, when Neocities slowed down after a car crashed into a fiber uplink we use in Canada (no suspects yet, but a few promising leads). I’ve also heard stories where hunters have shot at the fiber cables connecting the eastern Oregon datacenters (the enormous ones that store a lot of data), requiring engineers to show up on snowmobiles with cross country skis to repair the fiber lines. Since I wrote this post, details have emerged on a sophisticated attack on fiber lines happening in the Bay Area. The point is, the internet backbone isn’t perfect, it’s easy to attack it, and it’s easy for service to get affected by a few important fiber lines getting cut.

Part 2: How IPFS solves these problems

We’ve discussed HTTP’s problems (and the problems of hypercentralization). Now let’s talk about IPFS, and how it can help improve the web.

IPFS fundamentally changes the way we look for things, and this is its key feature. With HTTP, you search for locations. With IPFS, you search for content.

Let me show you an example. This is a file on a server I run: https://neocities.org/img/neocitieslogo.svg. Your browser first finds the location (IP address) of the server, then asks my server for the file using the path name. With that design, only the owner (me) can determine that this is the file you’re looking for, and you are forced to trust that I don’t change it on you by moving the file, or shutting the server down.

Instead of looking for a centrally-controlled location and asking it what it thinks /img/neocitieslogo.svg is, what if we instead asked a distributed network of millions of computers not for the name of a file, but for the content that is supposed to be in the file?

This is precisely what IPFS does.

When neocitieslogo.svg is added to my IPFS node, it gets a new name: QmXGTaGWTT1uUtfSb2sBAvArMEVLK4rQEcQg5bv7wwdzwU. That name is actually a cryptographic hash, which has been computed from the contents of that file. That hash is guaranteed by cryptography to always only represent the contents of that file. If I change that file by even one bit, the hash will become something completely different.
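This content-addressing property is simple to sketch. Real IPFS hashes a chunked Merkle-DAG encoding of the file and base58-encodes a multihash (hence the Qm… names); plain SHA-256 over the raw bytes, shown here, only illustrates the principle:

```python
import hashlib

# Simplified sketch of content addressing. The name is derived from,
# and verifies, the content itself: change one byte and the name changes
# completely. (IPFS actually uses a multihash over a chunked Merkle DAG.)

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

a = content_address(b"<svg>neocities logo</svg>")
b = content_address(b"<svg>neocities logo!</svg>")  # one byte different
assert a != b   # any change at all produces a completely different name
```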

When I ask the IPFS distributed network for that hash, it efficiently (20 hops for a network of 10,000,000) finds the nodes that have the data using a Distributed Hash Table, retrieves it, and verifies using the hash that it’s the correct data. Early DHT designs had issues with Sybil attacks, but we have new ways to address them, and I’m very confident this is a solvable problem (unlike the problems with HTTP, which are just going to be broken forever).

IPFS is general purpose, and has little in the way of storage limitations. It can serve files that are large or small. It automatically breaks up larger files into smaller chunks, allowing IPFS nodes to download (or stream) files from not just one server like with HTTP, but hundreds of them simultaneously. The IPFS network becomes a finely-grained, trustless, distributed, easily federated Content Delivery Network (CDN). This is useful for pretty much everything involving data: images, video streaming, distributed databases, entire operating systems, blockchains, backups of 8 inch floppy disks, and most important for us, static web sites.

IPFS files can also be special IPFS directory objects, which allow you to use human readable filenames (which transparently link to other IPFS hashes). You can load the directory’s index.html by default, the same way a standard HTTP server does. Using directory objects, IPFS allows you to make static web sites exactly the same way you make them today. It’s a single command to add your web site to an IPFS node: ipfs add -r yoursitedirectory. After that, it’s available from any IPFS node without requiring you to link to any hashes in the HTML (example, and example with index.html renamed).

Federating data with IPFS

IPFS doesn’t require every node to store all of the content that has ever been published to IPFS. Instead, you choose what data you want to help persist. Think of it like bookmarks, except instead of bookmarking a link to a site that will eventually fail, you back up the entire site for yourself, and volunteer to help to serve the content to others that want to look at it.

If a lot of nodes host a little bit, these little bits quickly add up to more space, bandwidth and availability than any centralized HTTP service could ever possibly provide. The distributed web will quickly become the fastest, most available, and largest store of data on the planet earth. And nobody will have the power to “burn books” by turning it all off. This Library of Alexandria is never going to burn down.

Copying, storing and helping serve web sites from other IPFS nodes is easy. It just takes a single command and the hash of the site: ipfs pin add -r QmcKi2ae3uGb1kBg1yBpsuwoVqfmcByNdMiZ2pukxyLWD8. IPFS takes care of the rest.


IPFS hashes represent immutable data, which means they cannot be changed without the hash being different. This is a good thing because it encourages data persistence, but we still need a way to find the latest IPFS hash representing your site. IPFS accomplishes this using a special feature called IPNS.

IPNS allows you to use a private key to sign a reference to the IPFS hash representing the latest version of your site using a public key hash (pubkeyhash for short). If you’ve used Bitcoin before, you’re familiar with this - a Bitcoin address is also a pubkeyhash. With our Neocities IPFS node, I signed the image of Penelope (our site mascot) and you can load it using our IPNS pubkeyhash for that node: QmTodvhq9CUS9hH8rirt4YmihxJKZ5tYez8PtDmpWrVMKP.

IPNS isn’t done yet, so if that link doesn’t work, don’t fret. Just know that I will be able to change what that pubkeyhash points to, but the pubkeyhash will always remain the same. When it’s done, it will solve the site updating problem.

Now we just need to make the location of these sites human-readable, and we’ve got all the pieces we need.

Human-readable mutable addressing

IPFS/IPNS hashes are big, ugly strings that aren’t easy to memorize. So IPFS allows you to use the existing Domain Name System (DNS) to provide human-readable links to IPFS/IPNS content. It does this by allowing you to insert the hash into a TXT record on your nameserver (if you have a command line handy, run this: dig TXT ipfs.git.sexy). You can see this in action by visiting http://ipfs.io/ipns/ipfs.git.sexy/.

Going forward, IPFS has plans to also support Namecoin, which could theoretically be used to create a completely decentralized, distributed web that has no requirements for a central authority in the entire chain. No ICANN, no central servers, no politics, no expensive certificate “authorities”, and no choke points. It sounds crazy. It is crazy. And yet, it’s completely possible with today’s technology!

IPFS HTTP gateway: The bridge between the old web and the new

The IPFS implementation ships with an HTTP gateway I’ve been using to show examples, allowing current web browsers to access IPFS until the browsers implement IPFS directly (too early? I don’t care). With the IPFS HTTP gateway (and a little nginx shoe polish), we don’t have to wait. We can soon start switching over to IPFS for storing, distributing, and serving web sites.

How we’re using IPFS now

Our initial implementation of IPFS is experimental and modest, for now. Neocities will be publishing an IPFS hash once per day when sites are updated, accessible from every site profile. This hash will point to the latest version of the site, and be accessible via our IPFS HTTP gateway. Because the IPFS hash changes for each update, this also enables us to provide an archive history for all the sites, something we automagically just get from the way that IPFS works anyways.

How we’ll use IPNS in the future

Long-term, if things go well, we want to use IPFS for storing all of our sites, and issue IPNS keys for each site. This would enable users to publish content to their site independently of us. If we do it right, even if Neocities doesn’t exist anymore, our users can still update their sites. We effectively take our users’ central dependence on our servers and smash it to pieces, permanently ruining our plans for centralized world domination forever. It sounds awesome. It is awesome!

It’s still early, and there’s much work to do before IPFS can replace HTTP without needing to describe the idea as crazy. But there’s no time like the present to plan for the future. It’s time for us to get to work. Accept the Internet Archive’s challenge: distribute the web.

– Kyle



TLS 1.2 Session Tickets


More specifically, TLS 1.2 Session Tickets.

Session Tickets, specified in RFC 5077, are a technique to resume TLS sessions by storing key material, encrypted, on the clients. In TLS 1.2 they speed up the handshake from two round-trips to one.

Unfortunately, a combination of deployment realities and three design flaws makes them the weakest link in modern TLS, potentially turning limited key compromise into passive decryption of large amounts of traffic.

How Session Tickets work

A modern TLS 1.2 connection starts like this:

  • The client sends the supported parameters;
  • the server chooses the parameters and sends the certificate along with the first half of the Diffie-Hellman key exchange;
  • the client sends the second half of the Diffie-Hellman exchange, computes the session keys and switches to encrypted communication;
  • the server computes the session keys and switches to encrypted communication.

This involves two round-trips between client and server before the connection is ready for application data.

[Diagram: a normal TLS 1.2 handshake, taking two round-trips]

The Diffie-Hellman key exchange is what provides Forward Secrecy: even if the attacker obtains the certificate key and a connection transcript after the connection ended they can't decrypt the data, because they don't have the ephemeral session key.
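A textbook sketch of that exchange, with toy numbers (real TLS uses elliptic curves or 2048-bit prime groups, not p = 23):

```python
import secrets

# Textbook Diffie-Hellman with toy numbers, to make the exchange concrete.
# (Real TLS uses elliptic curves or 2048-bit prime groups, not p = 23.)
p, g = 23, 5                       # public parameters, known to everyone

a = secrets.randbelow(p - 2) + 1   # server's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # client's ephemeral secret

A = pow(g, a, p)                   # first half, sent by the server
B = pow(g, b, p)                   # second half, sent by the client

# Both sides derive the same session key; it never crosses the wire, and
# discarding a and b afterwards is what makes the session forward-secret.
assert pow(B, a, p) == pow(A, b, p)
```

A wiretapper sees only p, g, A and B; without one of the ephemeral secrets, the shared key is out of reach, which is exactly the forward-secrecy property at stake in the flaws below.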

Forward Secrecy also translates into security against a passive attacker. An attacker that can wiretap but not modify the traffic has the same capabilities as an attacker that obtains a transcript of the connection after it's over. Preventing passive attacks is important because they can be carried out at scale with little risk of detection.

Session Tickets reduce the overhead of the handshake. When a client supports Session Tickets, the server will encrypt the session key with a key only the server has, the Session Ticket Encryption Key (STEK), and send it to the client. The client holds on to that encrypted session key, called a ticket, and to the corresponding session key. The server forgets about the client, allowing stateless deployments.

The next time the client wants to connect to that server it sends the ticket along with the initial parameters. If the server still has the STEK it will decrypt the ticket, extract the session key, and start using it. This establishes a resumed connection and saves a round-trip by skipping the key negotiation. Otherwise, client and server fallback to a normal handshake.

[Diagram: a resumed TLS 1.2 handshake, taking one round-trip]
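The wrap-and-forget mechanism can be sketched like this. This is a toy construction for illustration only, not RFC 5077's actual ticket format or cipher:

```python
import hashlib, hmac, secrets

# Toy sketch of the wrap-and-forget idea (NOT RFC 5077's actual ticket
# format or cipher): the server encrypts the session key under its
# long-lived STEK, hands the blob to the client, and keeps no state.

STEK = secrets.token_bytes(32)           # Session Ticket Encryption Key

def wrap(session_key: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = hashlib.sha256(STEK + nonce).digest()          # toy keystream
    ct = bytes(k ^ s for k, s in zip(session_key, stream))
    tag = hmac.new(STEK, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag              # the "ticket" the client stores

def unwrap(ticket: bytes) -> bytes:
    nonce, ct, tag = ticket[:16], ticket[16:-32], ticket[-32:]
    expect = hmac.new(STEK, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("bad ticket")   # fall back to a full handshake
    stream = hashlib.sha256(STEK + nonce).digest()
    return bytes(c ^ s for c, s in zip(ct, stream))

session_key = secrets.token_bytes(32)
ticket = wrap(session_key)               # sent to the client, server forgets
assert unwrap(ticket) == session_key     # resumption recovers the key
```

Note how everything hinges on the one long-lived STEK: whoever holds it can unwrap any ticket, which is precisely the problem the three flaws below compound.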

For a recap you can also watch the first part of my 33c3 talk.

Fatal flaw #1

The first problem with 1.2 Session Tickets is that resumed connections don't perform any Diffie-Hellman exchange, so they don't offer Forward Secrecy against the compromise of the STEK. That is, an attacker that obtains a transcript of a resumed connection and the STEK can decrypt the whole conversation.

The specification addresses this by stating that STEKs must be rotated and destroyed periodically. I now believe this to be extremely unrealistic.

Session Tickets were expressly designed for stateless server deployments, implying scenarios where multiple servers serve the same site without shared state. These servers must also share STEKs, or resumption wouldn't work across them.

As soon as a key requires distribution it's exposed to an array of possible attacks that an ephemeral key in memory doesn't face. It has to be generated somewhere, and transmitted somehow between the machines, and that transmission might be recorded or persisted. Twitter wrote about how they faced and approached exactly this problem.

Moreover, an attacker that compromises a single machine can now decrypt traffic flowing through other machines, potentially violating security assumptions.

Finally, if a key is not properly rotated it allows an attacker to decrypt past traffic upon compromise.

TLS 1.3 solves this by supporting Diffie-Hellman along with Session Tickets, but TLS 1.2 was not yet structured to support one round trip Diffie-Hellman (because of the legacy static RSA structure).

These observations are not new; Adam Langley wrote about them in 2013, and TLS 1.3 was indeed built to address them.

Fatal flaw #2

Session Tickets contain the session keys of the original connection, so a compromised Session Ticket lets the attacker decrypt not only the resumed connection, but also the original connection.

This potentially degrades the Forward Secrecy of non-resumed connections, too.

The problem is exacerbated when a session is regularly resumed, and the same session keys keep getting re-wrapped into new Session Tickets (a resumed connection can in turn generate a Session Ticket), possibly with different STEKs over time. The same session key can stay in use for weeks or even months, weakening Forward Secrecy.

TLS 1.3 addresses this by effectively hashing (a one-way function) the current keys to obtain the keys for the resumed connection. While hashing is a pretty obvious solution, in TLS 1.2 there was no structured key schedule, so there was no easy agnostic way to specify how keys should be derived for each different cipher suite.
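The idea can be sketched in a few lines. This is not the actual TLS 1.3 key schedule (which uses HKDF with specific labels); it only illustrates how a one-way hash step prevents a later secret from revealing an earlier one:

```python
import hashlib

def next_secret(current: bytes) -> bytes:
    # One-way derivation: given the output, `current` cannot be recovered.
    return hashlib.sha256(b"resumption" + current).digest()

s0 = bytes(32)        # stand-in for the original session secret
s1 = next_secret(s0)  # secret carried into the resumed connection
s2 = next_secret(s1)  # and into the next resumption after that
# compromising s2 reveals nothing about s1 or s0
```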

Fatal flaw #3

The NewSessionTicket message containing the Session Ticket is sent from the server to the client just before the ChangeCipherSpec message.

Client                                               Server

ClientHello
(empty SessionTicket extension)
                             -------->
                                                 ServerHello
                             (empty SessionTicket extension)
                                                Certificate*
                                          ServerKeyExchange*
                                         CertificateRequest*
                             <--------      ServerHelloDone
Certificate*
ClientKeyExchange
CertificateVerify*
[ChangeCipherSpec]
Finished                     -------->
                                            NewSessionTicket
                                          [ChangeCipherSpec]
                             <--------             Finished
Application Data             <------->     Application Data

The ChangeCipherSpec message enables encryption with the session keys and the negotiated cipher, so everything exchanged during the handshake before that message is sent in plaintext.

This means that Session Tickets are sent in the clear at the beginning of the original connection.

(╯°□°)╯︵ ┻━┻

An attacker with the STEK doesn't need to wait until session resumption is attempted. Session Tickets containing the current session keys are sent at the beginning of every connection that merely supports Session Tickets. In plaintext on the wire, ready to be decrypted with the STEK, fully bypassing Diffie-Hellman.

TLS 1.3 solves this by... not sending them in plaintext. I can find no strong reason why TLS 1.2 couldn't have waited until after the ChangeCipherSpec to send the NewSessionTicket: the two messages are sent back to back in the same flight. Someone suggested it might be to avoid complicating implementations that do not expect encrypted handshake messages (other than Finished).

1 + 2 + 3 = dragnet surveillance

The unfortunate combination of these three well-known flaws is that an attacker who obtains the Session Ticket Encryption Key can passively decrypt all connections that support Session Tickets, resumed or not.

It's grimly similar to a key escrow system: just before switching to encrypted communication, the session keys are sent on the wire encrypted with a (somewhat) fixed key.

Passive attacks are the enablers of dragnet surveillance, which is exactly what HTTPS aims to prevent, and the same actors that are known to engage in dragnet surveillance have specialized in surgical key extraction attacks.

There is no proof that these attacks are currently performed, and the aim of this post is not to spread FUD about TLS, which is still the most impactful security measure on the Internet today despite all its defects. However, war-gaming the most effective attacks is a valuable exercise to ensure we focus on improving the important parts, and Session Tickets are often the single weakest link in TLS, far ahead of the CA system that receives so much more attention.

Session Tickets in the real world

The likelihood and impact of the described attacks change depending on how Session Tickets are deployed.

Drew Springall et al. made a good survey in "Measuring the Security Harm of TLS Crypto Shortcuts", revealing how many networks neglect to rotate STEKs regularly. Tim Taubert wrote about what popular software stacks do about key rotation. The landscape is bleak.

In some cases, the same STEK can be used across national borders, putting it under multiple jurisdictional threats. A single compromised machine then enables an attacker to decrypt traffic passively across the whole world by simply exfiltrating a short key every rotation period.

Mitigating this by using different STEKs across geographical locations involves a trade-off, since it disables session resumption for clients roaming across them. It does however increase the cost for what appears to be the easiest dragnet surveillance avenue at this time, which is always a good result.

In conclusion, I can't wait for TLS 1.3.


Snowflake macro photography

Russian version: «Как сфотографировать снежинку?» ("How to photograph a snowflake?")

My main hobby is taking closeup pictures of snowflakes. Real snow crystals are amazing subjects for macro photography, thanks to their beauty, uniqueness and unlimited diversity. Even after eight winters of regular photo sessions, having seen thousands of snowflakes in all their details, I never tire of admiring a new crystal with an amazing form or an incredible inner pattern.

Some people think that snowflake photography is a complex matter that requires expensive equipment, but in fact it can be inexpensive, very interesting and, after some practice, quite easy.

Currently, I use a low-cost variation of the well-known reversed-lens macro technique: a compact camera, a Canon PowerShot A650is at maximum optical zoom (6x), shoots through a Helios 44M-5 lens (taken from an old Zenit film camera, made in the USSR) mounted in reverse in front of the built-in optics. Compared to the A650's standard macro mode, this simple setup achieves much better magnification and detail, with less chromatic aberration and corner blurring, but also a very shallow depth of field.

I capture every snowflake as a short series of identical photos (usually 8-10; for the most interesting and beautiful crystals, 16 shots and more) and average them (after aligning, each resulting pixel is the arithmetic mean of the corresponding pixels from all shots in the series) at the very first stage of my processing workflow. The averaging technique dramatically reduces noise and reveals thin, subtle details and color transitions that are almost invisible in any single shot of the series, because they are masked by noise.

Snowflake photo: Rigel (stellar dendrite snow crystal with sharp, pointed arms, glittering on a dark grey textured background)

Snowflake photo: Gardener's dream (large stellar dendrite crystal with massive tree-like branches and a sectored center, on a light blue background)

* More than 100 different snowflake prints are available at the artist website.

My camera runs CHDK, the Canon Hack Development Kit: a resident program that works alongside the firmware and expands the camera's functionality. Once installed on the SD card, it starts automatically when the camera is turned on. This "alternate firmware" is a really wonderful thing that turns a compact camera into a powerful tool, capable of writing RAW, exposure bracketing for HDR, focus bracketing for focus stacks, executing scripts in a BASIC-like language and much more. Installing and uninstalling CHDK is an easy and non-destructive process. I highly recommend CHDK to photographers with compatible cameras (it supports many Canon compacts).

CHDK is not necessary for snowflake macro photography, but it is very useful, because it supports RAW as well as standard JPEG and is able to execute scripts. I use the Ultra Intervalometer script with zero delay between serial shots. With this setting, it works as a continuous series with auto-focusing before each shot. Re-focusing helps with the small shifts of camera and/or snowflake that happen very often.

Equipment and place

The necessary equipment is not expensive (I already had everything I needed: the camera, the Helios 44 lens and all the other components, so this macro gear cost me nothing).

I think almost every compact camera is suitable for this simple setup, especially one with a good optical zoom and a high-resolution sensor. Instead of the Helios 44, many other external lenses can be used: I have also successfully tested an Industar 50 and a Zenitar 2/50 (both lenses were likewise manufactured in the USSR for the Zenit film SLR). To test macro capabilities, you can simply hold the external lens in front of the camera working at maximum zoom and take a few test photos. In my case the Helios works fine, and I even managed to capture some nice pictures of insects and spiders while holding the external lens in my hand (though this was not comfortable without mounting the lens in front of the camera in some way).

The smaller the focal length of the external lens, and the longer that of the built-in optics, the greater the magnification, but the shallower the depth of field. Compact cameras, with their physically small sensors, have an advantage over DSLRs in depth of field and mobility, letting you shoot quickly and change location and shooting angle easily. But a small sensor has a much higher noise level.

My shooting place is the open balcony of my house. Less than half of it is covered by a roof; the other part is under the open sky. When the snow is light, I photograph on the open part, choosing the most beautiful and interesting snowflakes that fall on the background, and clean the background periodically, when it becomes covered with snow. When snowfall is heavy, I usually photograph under the roof, bringing the background out under the snow for a short time to collect new crystals. I'm lucky to have such a nice place, where nobody disturbs me and I can return into the house when I freeze.

First efforts

Initially, two findings on the web inspired me to try snowflake photography:

The first was the famous site SnowCrystals.com by Kenneth G. Libbrecht, professor of physics at the California Institute of Technology (Caltech). I could not believe my eyes when I first saw his photographs of snowflakes: they are so amazing and beautiful. For me, the snowflakes of Kenneth Libbrecht, Don Komarechka, and several other excellent photographers (mentioned further on) are the standard of quality and an ideal to aspire to.

But, like many people who see really good snowflake photos for the first time, I thought it was impossible for an amateur photographer to capture something like this, without any experience or an expensive microscope. Now I know that this is completely wrong! Any photographer with a simple point-and-shoot camera can take very good snowflake pictures. For this type of photography, patience, persistence and luck mean much more than any expensive equipment. It is necessary to wait for good snowfalls, which bring a large number of interesting and beautiful snowflakes. They happen not so often (at least in Moscow), but just one lucky day can give you lots of wonderful photographs, worth weeks and months of waiting and capturing only uninteresting specimens.

For example, my best snowflake photos from the winter of 2013-14 were taken on two successful days, 16 and 26 January, though I photographed throughout the winter at every opportunity:

The second discovery convinced me it was possible. On a Russian photography site I stumbled across two photographs of snowflakes (sorry, I can't remember the author's name). They were shot on dark woolen fabric (a material often used to catch snowflakes, and one of my favorite backgrounds: it has several important advantages). Against this background, snowflakes look very impressive, like precious gems in a jewelry store. And this beauty was photographed with a conventional compact camera, without a microscope! From that day I waited for winter like never before, to try snowflake photography myself.

In the beginning, in December 2008, I started photographing snowflakes in the camera's standard macro mode, without any optical extensions or tricks. I just tried several backdrops: colored plastic folders for paper, a dark green carpet, and black wool fabric. The Canon PowerShot A650 has a 12-megapixel sensor and a good macro mode, in which it can focus from 1 centimeter in front of the lens. That's enough to get good pictures of snowflakes, but at very low resolution: it is necessary to cut a small central part with the object and some surrounding background out of the whole frame:

Source snowflake photo (4000 x 3000) and processed picture (800 x 600): Darkside, a real snowflake photographed on dark woolen fabric.

Depending on the size of the snowflake (it can vary over a very wide range), the size of the finished picture was from 640 x 480 to 1024 x 768 pixels, no more. This was only suitable for the web or collages, not for prints.

Here are some examples of close-up snowflake photos taken in standard macro mode: old snowflake shots, 2009-2011.

Dark and bright photos

In those days I photographed snowflakes in two ways, and today I use both, with some improvements:

1. For dark images with a bright snowflake I use a dark background (at first a green rug for footwear, made of artificial fibers; later, black woolen fabric) and the natural light of a cloudy sky. The background is laid out on a stool, and as soon as I see good and interesting crystals among the fallen snowflakes, I photograph them at an angle, touching the background with both hands and the bottom of the camera for steady shots (light on cloudy winter days is rather dull, and the shutter time is not too short). The camera shoots in macro mode from the minimum distance at which it can focus. In the camera settings I choose focusing on a small central area instead of the standard auto-focusing on multiple zones, which allows me to focus precisely on the center or front edge of the snowflake; and I set exposure metering to the central spot instead of evaluating the whole frame (otherwise a small bright snowflake on a dark background will be overexposed; alternative ways are to use negative exposure compensation, or to set the shutter speed manually).
2. For images with a bright background and a transparent silhouette of the snowflake: shooting on a glass surface with backlight. On the floor of the balcony I put a stool upturned, legs up; on its legs I put four pieces of foam rubber (anti-slip), and over them a sheet of glass. To aim the shot straight down without any camera shake, I made a simple tripod substitute: I took a small plastic bottle and cut a cylinder 5.5 centimeters high out of its middle part. That height is chosen so that the camera lens, pushed in, stops 1 centimeter short of the bottom (the minimum focusing distance of the Canon PowerShot A650 in macro mode). When snowflakes fall on the glass, I put this tube with the camera over a chosen snowflake and shoot with a 2-second delay (using the custom timer function in the camera menu), so I have time to take my hands off the camera for a steady shot.

With my left hand, I light the snowflake from below the glass with an LED flashlight. I put a white plastic bag over the flashlight; it serves as a diffuser, making the LED light more uniform. The light is strong enough to photograph even at night, with the lowest ISO and short shutter speeds. If the flashlight points straight up, the camera screen shows only the dark outlines of the snowflake and its internal structure on a light background (which looks pretty boring, to my taste). If we move the flashlight a little sideways and point it at the snowflake at an angle, the snowflake's silhouette becomes volumetric, with dark and light contours. This looks more interesting and shows the internal structure of the snow crystal much better. I cool the sheet of glass outside for at least 15 minutes before starting to photograph, otherwise it melts the snowflakes. When the glass becomes covered with a layer of snow, I remove it with a dry towel.

Since 2013, for photos on glass I have used simple multi-colored lighting: instead of the white diffuser, I put over the flashlight a fragment of a plastic bag with some color pattern or text (for example, white/orange or blue/white/pink). Because I hold the flashlight at some distance from the glass with the snowflake, it is completely out of focus; the resulting picture contains only a smooth color gradient in the background and snow crystals with multi-colored facets. Even if I do not like the colors in the source photos, this is not a problem: I can easily convert the existing colors to more pleasant variants by applying custom contrast curves to channels A and B in LAB color space.

Averaging technique

Since 2011, I capture every snowflake as a short series of identical shots. During processing, this series is aligned, averaged and merged into one picture. This technique dramatically reduces noise (improving the image's signal-to-noise ratio) and reveals weak, subtle details masked in each single photo by noise. Here is an article about the averaging technique.
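The core of the averaging step is tiny. Here is a minimal NumPy sketch; synthetic frames stand in for the aligned TIFF series, and the 1/sqrt(N) noise figure assumes independent noise per frame:

```python
import numpy as np

def average_series(frames):
    """Per-pixel arithmetic mean of an aligned series of frames.
    With N frames of independent noise, the residual noise standard
    deviation drops by roughly a factor of sqrt(N)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# synthetic demo: a flat gray "image" plus noise, as a 9-shot series
rng = np.random.default_rng(0)
clean = np.full((100, 100), 128.0)
frames = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(9)]
merged = average_series(frames)
# residual noise should be about 10 / sqrt(9) = 3.3
```

In the real workflow the frames come from the RAW-converted TIFFs after alignment; the mean is the same operation either way.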

Macro setup with external optics

In 2012, I started using external optics for better magnification of snow crystals, and built a simple optical add-on for my camera. This macro rig is based on a Helios 44M-5 lens, taken from an old Zenit camera:
Lens Helios 44 magnification

At first, I picked a narrow wooden board (around 30 centimeters long and 6 centimeters wide) and temporarily attached the Helios 44 lens to one end, mounted in reverse: rear element toward the object, front element toward the camera. Then I placed the camera on the board next to the Helios, turned it on and set the optical zoom to maximum (6x). I aligned the camera so that the internal lens almost touched the Helios and both lenses were on the same optical axis. Then I mounted the Helios permanently (with many layers of duct tape) and drilled an opening through the board for a screw that fits the camera's tripod socket. I attach the camera with the screw from below, and additionally with a small metal bracket glued to the side of the wooden board: it holds the opposite side of the camera, so it doesn't move when attached. The camera attaches and detaches easily and quickly. On the rear side of the Helios (which is the front side of the whole macro setup) I attached three standard narrow extension rings from a Zenit camera (this is necessary only when shooting vertically on glass; in other cases I detach them). These rings keep the Helios at the needed focusing distance from the snowflakes on the glass (2.5-3 centimeters in my case). I covered the point of contact of the internal and external lenses with a sleeve made from a black plastic bag: it protects this zone from outside light, snow, ice and drops of water. Also, for anti-slip, I glued thin rubber to the front edge of the board (below the Helios) and to the outer extension ring, which stands on the glass.

The whole construction turned out quite sturdy and stands steadily and vertically on the extension rings. I have used it for three winters, and it has not required any repair. I simply put this setup on the glass over the chosen snowflakes and photograph at maximum optical zoom. Auto-focus works fine; the camera focuses through the external optics without any issues.

Here is a scheme of this macro setup (a larger image opens on Flickr):

Scheme of snowflake macro setup, based on compact camera and lens Helios 44

The assembled rig (definitely not the most beautiful thing in the world):

Snowflake macro rig, assembled from compact camera Canon Powershot A650is and lens Helios 44: side view

Snowflake macro setup, assembled from compact camera Canon Powershot A650is and lens Helios 44: front view

All ready; the only thing missing is snow:

Snowflake macro photography

Snowflake photography on opaque backgrounds

I also use this setup to shoot on a dark opaque background (with the extension rings detached). For these photos I use the natural light of a cloudy sky. I shoot at an angle: the rear side of the board rests on a small desktop tripod (Continent TR-F7). On top of the tripod I attached a small flat piece of wood for better stability. By bending the flexible legs of the tripod, I easily adjust the angle at which the camera points at the snowflake. The near side of the board (with the front lens) lies on the background. I capture a series for one crystal, then quickly move the macro setup and point it at the next snowflake.
Snowflake macro setup, based on compact camera Canon Powershot A650is and lens Helios 44: photography at angle against dark woolen background

My main background for these shots is black woolen fabric. It has very useful properties, thanks to the thin, rigid, springy fibers rising from the fabric in all directions. First: when snowflakes fall onto such a fabric, they usually hang in the air, touching the fibers at only a few points, and this slows down melting. Second: snowflakes often get caught and held by the fibers so well that the wind can't blow them away, or even move them. Third: when needed, it is easy to move or tilt a snowflake in the desired direction with a toothpick, and the crystals are usually held in their new position by the fibers and do not fall or shift. But despite its practicality, I don't particularly like the look of this background in photographs. When processing photos, I often try to clean up the most distracting fibers near the snowflakes.

When capturing snowflakes at an angle, it immediately becomes apparent that the depth of field is too small, even at the narrowest aperture. The obvious solution is the focus stacking technique, with at least 3-5 shots, although it is not easy to shoot large series for combined focus stacking and averaging when the subject quickly melts and sublimates. An alternative solution is easier: when a snowflake rests on the fibers of the wool fabric, it is not difficult to lean it into the desired position. So it is sufficient to aim the macro setup at the crystal and then, looking from the side, change the slope of the snowflake with a toothpick so that it lies parallel to the plane of the lens; in that case it fits into the focus zone completely. Despite their apparent fragility, snowflakes are surprisingly strong and usually withstand a few touches of a toothpick without any visible signs of damage.
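For reference, the focus stacking idea itself is simple: for each pixel, keep the value from whichever frame is locally sharpest there. A naive NumPy sketch, using local gradient energy as the sharpness measure (real stacking tools also align frames and blend seams more carefully):

```python
import numpy as np

def box_blur(img, win):
    """Separable box blur, used to pool sharpness over a neighborhood."""
    k = np.ones(win) / win
    img = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, img, k, mode="same")

def focus_stack(frames, win=7):
    """Naive focus stack: per pixel, take the value from the frame
    with the highest local gradient energy (i.e. the sharpest one)."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    sharpness = []
    for f in stack:
        gy, gx = np.gradient(f)
        sharpness.append(box_blur(gx**2 + gy**2, win))
    best = np.argmax(np.stack(sharpness), axis=0)  # winning frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

Given two frames where different halves are in focus, the result keeps the sharp half of each.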

I constantly use wooden toothpicks for this photography, and always keep a few spares out in the cold. Besides tilting snowflakes to the desired angle, I use them to lift snowflakes seated too deeply in the wool fibers, to move away crystals that fell next to the one selected for shooting, and even to separate snowflake clusters into single crystals.

Camera settings

I use these camera settings: shutter priority mode (Tv); exposure time: 2 seconds (a much longer shutter speed than the camera would select automatically); focusing on a single central area instead of multiple zones; exposure metering on the central spot instead of evaluating the whole frame (otherwise a snowflake on a dark background will be overexposed). All other settings on Auto.

With these strange settings, the camera tries to lower the exposure (to keep it normal at the central spot with the bright snowflake). The camera lowers the ISO to minimum and selects the narrowest aperture; the exposure is still too high, so finally the camera shortens the exposure time. The result is exactly what we need: photos with correct exposure, minimum noise and the maximum possible depth of field. Of course, it is possible to set all the parameters in manual mode (M), including minimum ISO and the narrowest aperture, but then we would have to set the shutter speed manually and adjust it whenever the lighting conditions change.

But not every Canon compact behaves like this: the Canon G10, for example, does not adjust the shutter speed set in Tv mode, and over-exposes photos with these settings. For such cameras it will likely be best to set all parameters in manual mode instead of Tv.

For photos on glass I use the same parameters, except for a starting shutter speed of 1 second (in this mode there is much more light) and center-weighted or evaluative exposure metering: we usually have a bright background with a slightly darker snowflake, and spot metering on the center could lead to an overexposed background.

Shooting strategy

In most cases, I capture snowflakes immediately when they fall on the background (wool fabric or glass), without moving them anywhere, only adjusting their placement with a toothpick if needed. This allows me to shoot fast and take lots of photos. Although the percentage of bad-quality images is quite high, and the background often contains many unfocused crystals and ice debris (which have to be removed at the processing stage), I can still get a decent number of good shots in a limited time. This is important if the good snow does not last long.

An alternative method also gives good results: keep a clean background somewhere under a canopy and move the best crystals onto it one by one, using a fine watercolor brush. Transferring snowflakes with a brush is a fairly simple process and requires no special skill. We can use a very large collection board under the snow, and will have many more interesting crystals to choose from. The disadvantage of this method is a rather slow shooting process.


Here are some unprocessed, straight-out-of-camera JPEGs for comparison: standard macro mode vs. external optics. Please click the images to open them at full 12-megapixel resolution:

Standard macro mode vs. with Helios 44:
• On glass with LED backlight: standard macro mode / macro setup with reversed Helios 44
• On dark wool background: standard macro mode / macro setup with reversed Helios 44

I have also prepared another before-and-after comparison table, with unprocessed, straight-out-of-camera photos next to the final snowflake pictures.

Here are summer photos taken with this macro setup (portraits of a house fly and a common wasp, the surface of a strawberry with visible cell structure, and pollen grains of Alcea rosea):

Macro photo: house fly with hair, beard and eye facets against blue backgroundMacro photo: portrait of common wasp with hair, jaws and eye facets against blue background
Macro photo: surface of strawberry fruit with seeds and visible cell structureMacro photo: pollen grains of Alcea Rosea, or common hollyhock, against black background


Shooting is fast and easy, but my processing workflow takes significant time and effort. I try to get the most quality out of the available sources and produce a picture with low noise while preserving every possible detail. At the first stage, I convert the source shots from RAW to TIFF, then align and average the series for the selected crystal. Then I work on sharpening, additional noise removal, cleaning the background of ice debris, unfocused crystals and other distracting elements, color toning (I prefer adding blue tones to my snowflake pictures: in most cases the source shots look too monochrome and unappealing, to my taste) and, finally, a contrast curve. My workflow also includes manually drawing a precise mask to separate the crystal from the background. This mask is used to process the crystal and the background with different sharpening and noise-removal settings. Drawing these masks requires patience, accuracy and a lot of time, especially for big and complex snowflakes.

Snowflake pictures

These are closeup snowflake images, taken with the new macro setup and post-processed:

Snowflake photo: Oak leaves or feathers? (hexagonal plate crystal with an amazing internal structure)
Snowflake photo: Flower within a flower (small star-shaped crystal with an unusually complex and dense pattern inside the central hexagon)
Snowflake photo: Ice relief (snow crystal with an amazing relief surface and a ring pattern in the center)
Snowflake photo: Sunflower (crystal with a large, flat, empty central hexagon and six short, broad arms with a glossy relief surface)
Snowflake photo: Snow Queen's capacitors (group of hollow column crystals with hourglass-shaped cavities)
Snowflake photo: Ice dust 2 (macro photo on glass with LED backlight)
Snowflake photo: Majestic crystal (fernlike dendrite with complex structure, fine symmetry and six large, elegant arms)
Snowflake photo: Jewel (macro photo on dark woolen fabric)
Snowflake photo: Web (hexagonal plate crystal with a pattern resembling a spider web)
Snowflake photo: Capped column (unusual crystal with a massive icy column and small hexagonal caps on opposite ends)
Snowflake photo: Almost triangle (macro photo on dark woolen fabric)
Snowflake photo: Alien's data disk (hexagonal plate crystal with a ring pattern resembling the pits and lands of a CD)
Snowflake photo: Hex appeal (hexagonal plate with beautiful transparency)
Snowflake photo: The core (very clear and symmetrical crystal with broad arms)
Snowflake photo: Ice crown (crystal with massive broad arms and a central pattern resembling a coat of arms)
Snowflake photo: Starlight (stellar dendrite with sharp, ornate arms resembling swords)
Snowflake photo: Less is more (small hexagonal plate with a surprisingly simple pattern)
Snowflake photo: Beneath a steel sky (macro photo on glass with LED backlight)
Snowflake photo: Cold metal (crystal with a glossy surface, broad arms and a large central hexagon)
Snowflake photo: Flying castle (crystal with a beautiful shape and a glossy relief surface)
Snowflake photo: Massive gold (crystal with a glossy relief surface in warm golden light)
Snowflake photo: Frozen hearts (tiny hexagonal plate with an unusual pattern of hearts around the center)
Snowflake photo: Heart powered star (star plate with a rare ring pattern of six heart-shaped elements)

Rainbow snowflakes

These are snowflakes with a clearly visible thin-film interference effect (described on Wikipedia). The same effect produces the rainbow colors of soap bubbles, for example. Unlike bubbles, snowflakes show this effect only occasionally: when the snowflake contains air cavities in the center, and the interleaved layers of ice and air are very thin.
Several high resolution collages:
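As a side note, the colors can be estimated from the standard thin-film condition. A hedged sketch, assuming normal incidence, an ice refractive index of about 1.31, and air on both sides of the layer (so only the top reflection is phase-inverted, giving reflection maxima at 2nt = (m + 1/2)λ):

```python
N_ICE = 1.31  # approximate refractive index of ice

def reflected_maxima_nm(thickness_nm, lo=380.0, hi=750.0):
    """Visible wavelengths reinforced on reflection by a thin ice
    layer with air above and below, at normal incidence:
    constructive when 2 * n * t = (m + 1/2) * wavelength."""
    path = 2.0 * N_ICE * thickness_nm
    return [path / (m + 0.5) for m in range(8)
            if lo <= path / (m + 0.5) <= hi]

# a 250 nm layer reinforces a single blue-violet wavelength (~437 nm)
print(reflected_maxima_nm(250))
```

Thicker layers reinforce several visible wavelengths at once, which is why the colors shift across the crystal as the cavity thickness varies.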

All these snowflakes, and many others (more than a hundred), are available as prints through these print-on-demand services:

The artist website (and its mirrors at Pixels.com and FineArtAmerica.com), powered by FineArtAmerica / Pixels: one of the largest, most respected giclée printing companies in the world, with over 40 years of experience producing museum-quality prints. All prints are produced on state-of-the-art, professional-grade Epson printers:

Prints are also available at RedBubble.com and Society6.com.

If you are interested in commercial use of my photos, please e-mail me at chaoticmind75@gmail.com. Commercial licenses are also available at Shutterstock.com, 500px.com, iStockPhoto.com and Fotolia.com.

Recently I created a page with snowflake wallpapers. All wallpapers are available in different screen proportions (4:3, 5:4, 16:10, 16:9) and in resolutions from SVGA (800×600) up to Ultra HD 4K (3840×2160 pixels):

Snowflake photo wallpapers, up to Ultra HD 4K, aspect ratio: standard and widescreen, 4:3, 5:4, 16:10 and 16:9, free download

If you want to see more snowflakes, you can browse through all snowflake photos, starting from the most recently added:
Square grid collage with numerous photos of real snowflakes - Alexey Kljatov (Алексей Клятов) aka ChaoticMind75

Snowflake videos, animations and vectors

In this video by NASA you'll see my snowflake photos as examples of real snow crystals:

Direct link on YouTube

Here are other snowflake videos. And here are some GIF animations that I assembled from series of still photos: snowflake melting and sublimation processes.

Also, here you'll find vectorized snowflake pictures in EPS / SVG formats (and high-resolution PNG images, rasterized from the vectors).

See also

If you are interested in snowflake photography, I also recommend:

• Biography of Wilson Bentley, the pioneer of snowflake photography, his amazing photos, and a short documentary film about him;

• Don Komarechka, a Canada-based professional photographer, who recently published an excellent book about his approach to snowflake photography, the physics of ice crystal formation and many other interesting topics:

Don Komarechka «Sky Crystals: Unraveling the Mysteries of Snowflakes» book cover

Also, don't miss this ultra-high-resolution poster created by Don: «The Snowflake».

The Snowflake, ultra-high-resolution poster by Don Komarechka

And here is an article by Don Komarechka about his approach to snowflake photography with a DSLR camera and ring flash.

• SnowCrystals.com, created by Kenneth G. Libbrecht, an American professor of physics: a great resource about snowflake physics and photography. I especially recommend the section about snowflake classification, with photo examples of each type: Guide to Snowflakes. Kenneth's fantastic snowflake collection is also available on Flickr; don't miss his album Designer Snowflakes, with amazing and extremely beautiful crystals grown artificially in a laboratory.

• My mother shoots snowflakes with a similar technique (I've built another macro setup for her Canon G10, similar to my own). You can see her snowflake album on Flickr.

• And the snowflake albums of other excellent photographers on Flickr:

  • Pamela Eveleigh
  • Fred Widall
  • Mark Cassino
  • David Drexler
  • Linden Gledhill
  • Jessica D
  • Detached Retina
  • Peter O'Brien
  • Josh Shackleford

If you would like, you can subscribe to my blog by e-mail. I constantly work with snowflake, HDR and light-painting photos, and you'll see new pictures and wallpapers immediately after publication:

Emails are delivered by Google's FeedBurner service, and you can unsubscribe at any time.
Author: Alexey Kljatov (E-Mail: chaoticmind75@gmail.com)
Next article:

Light painting

Still-life light-painting photo technique with simple equipment: compact camera, tripod and LED flashlight in a dark room

