Do you really want to make animated movies?

Posted in cinema on 24 January 2016 by realuca

Film preservation >> The Digital Dilemma

Posted in cinema on 25 October 2015 by realuca

As the death of film accelerates, the terms and stakes of the battle are changing rapidly, in ways that aren’t well understood outside the small community of archivists working directly in the field. Digital technology offers a chance for perfect, lossless preservation, but only at significant financial cost, and higher risk of catastrophe.

The end product of the production pipeline isn’t an analog print, but a file known as the Digital Intermediate (or Digital Source Master). In the long term, the DSM has one huge advantage over a photochemical negative: as long as the data is preserved, it’s perfect.

If an analog version is preferred for aesthetic reasons, producing a new analog print from the digital information will yield better results than trying to preserve a photochemical print over the very long term. But in the near-term, preserving that information is significantly more difficult and expensive than preserving film.

The added costs haven’t been a secret—as early as 2007, when the death of photochemical film remained an open question, the Science and Technology Council of the Academy of Motion Picture Arts and Sciences surveyed the major studios and published its findings in a report called The Digital Dilemma. And yet what the report found was an environment in which long-term planning for preserving digital information was not being done, in which the existing technology wasn’t adequate for archival needs, and finally, in which film preservation would require “significant and perpetual spending” far above what was necessary for analog preservation.

The added cost and difficulty arise from a major conceptual difference between digital and analog preservation: In digital preservation, the media isn’t itself the object that needs to be preserved. The original camera negative of a film is a unique, irreplaceable object—any copy is inferior to the original. With a digital production, it doesn’t matter in the slightest whether the digital files are written to a hard drive, magnetic tape, flash drive, or Jaz disk. Archivists can use whatever media best suit their needs. But right now, there aren’t any great options.

The most commonly used format for digital archiving is Linear Tape-Open (LTO) technology, a magnetic tape format that is most commonly used for enterprise data backups. LTO tapes are more stable than hard drives, which are subject to mechanical failure, but they’re far from ideal. Although it’s estimated that they have a 15-to-30-year lifespan, most studios assume a practical lifespan of five years. It isn’t simply an issue of tape degradation, either: The drives that read the tapes are also subject to obsolescence. Since 2000, new generations of LTO technology have been released every two years or so—new tapes and new drives—and they’re only backward-compatible for two generations. So a film that was archived to tape in 2006 using then-state-of-the-art LTO-3 tapes can’t be read by the LTO-6 drives that are for sale today.
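
To make the compatibility arithmetic concrete, here is a minimal Python sketch of the two-generations-back read rule described above (the function name is mine, and real drive firmware is obviously more involved):

```python
# Sketch of LTO's traditional read-compatibility rule: a drive reads
# tapes from its own generation and the two generations before it.
def drive_can_read(drive_gen: int, tape_gen: int) -> bool:
    return drive_gen - 2 <= tape_gen <= drive_gen

print(drive_can_read(6, 4))  # True:  an LTO-6 drive still reads LTO-4
print(drive_can_read(6, 3))  # False: the 2006 LTO-3 archive is unreadable
```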

The practical result of this is that a digital film archive needs to invest heavily in data migration to maintain its assets. Every five years or so, each film needs to be copied to new media, in a constant race against magnetic-tape degradation and drive obsolescence.

See more on thedissolve

What we know as reality is actually a simulation of reality

Posted in cinema on 18 October 2015 by realuca

Observation is interpretation

Language is a means of control that locks us into traditional ways of thinking

Fight dictatorship of social superstructures with unexpected juxtapositions

How movies manipulate emotions with color >> And why Teal and Orange

Posted in Cinema and Photography on 14 October 2015 by realuca

For review, complementary colors are:
Red and green;
Yellow and purple;
Blue and orange.

One of the rules we learn as painters is that color opposites cancel each other out when mixed. In other words, when we combine a pair of complementary colors on our palette, the original or parent colors lose their intensity or chroma. They mix into a black or brown.

When you study each of these chromatic scales, you can see how just a small amount of the complementary color starts to de-saturate the parent color; the intensity of the color or chroma immediately begins to decrease when its complement is added.
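
The same cancellation is easy to demonstrate numerically. Note that the pairs listed above come from the traditional painter's (pigment) wheel; in additive RGB light the complement of red is cyan, but the effect is the same. A small Python sketch using only the standard library:

```python
import colorsys

def mix(c1, c2, t):
    """Linearly mix two RGB colours; t is the fraction of c2 (0 to 1)."""
    return tuple((1 - t) * a + t * b for a, b in zip(c1, c2))

red, cyan = (1.0, 0.0, 0.0), (0.0, 1.0, 1.0)  # additive complements

for t in (0.0, 0.1, 0.25, 0.5):
    h, s, v = colorsys.rgb_to_hsv(*mix(red, cyan, t))
    print(f"{t:.0%} complement -> saturation {s:.2f}")
# 0% -> 1.00, 10% -> 0.89, 25% -> 0.67, 50% -> 0.00 (neutral grey)
```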

See more on munsell

The big change that digitization brought was that it became much easier to apply a single color scheme to a bunch of different scenes at once. The more of a movie you can make look good with a single scheme, the less work you have to do. Also, as filmmakers bring many different film formats together in a single movie, applying a uniform color scheme helps tie them together.

One way to figure out what will look good is to find the common denominator in the majority of your scenes. And it turns out that actors are in most scenes. And actors are usually human. And humans are orange, at least sort of!

Most skin tones fall somewhere between pale peach and dark, dark brown, leaving them squarely in the orange segment of any color wheel. Blue and cyan are squarely on the opposite side of the wheel.
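
You can verify this with a few sample values. The RGB triples below are illustrative picks, not measured data; converting them to hue shows they all land around 20–30° (orange), with complements around 200–210° (blue/cyan):

```python
import colorsys

# Illustrative sRGB skin-tone samples (0..1), from pale to deep.
skin_tones = {"pale": (0.96, 0.80, 0.69),
              "tan":  (0.87, 0.68, 0.52),
              "deep": (0.45, 0.29, 0.21)}

for name, (r, g, b) in skin_tones.items():
    hue = colorsys.rgb_to_hsv(r, g, b)[0] * 360
    print(f"{name}: hue {hue:.0f}°, complement {(hue + 180) % 360:.0f}°")
```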

Unlike other pairs of complementary colors, fiery orange and cool blue are strongly associated with opposing concepts — fire and ice, earth and sky, land and sea, day and night, invested humanism vs. elegant indifference, good old-fashioned explosions vs. futuristic science stuff. It’s a trope because it’s used on purpose, and it does something.

See more on priceonomics

Color and Light – Linear and Log >> Human vs Video

Posted in cinema on 13 October 2015 by realuca

What We See

Human vision is complex: not only do we have a varying capacity to see colour and light, we also process what we see through our brains, which add layers of interpretation to colour and light.

Our eyes contain two types of light-sensitive cells: cones, which see colour and require bright light, and rods, which see in dim light. These two types of cells do not exist in equal proportions, nor are they distributed evenly in our eyes. The cones are fewer in number and are concentrated in the centre of our vision; the rods are more numerous and are concentrated primarily around the edges of our vision.

Whether the light gets darker or brighter, the decline in what we see is very gradual. We can see details in bright light, and will see colour, if not fine detail, into the very brightest of highlights. Our ability to distinguish colours and details declines gradually as the light fades, but we are able to detect motion and see shapes into very deep shadow.

What the Camera Gets

What a camera “sees” can be described simply: a camera’s sensor records a narrow range of light and colour, and its photoreceptors respond uniformly across the field of view. Photoreceptors do not desaturate colour in shadows, nor do they record more detail as the light gets brighter, and they do not record more colour in the centre of the field of view. Each photoreceptor, regardless of its location on the sensor, records colour and light as they exist within the sensor’s range of luminance. Further, a sensor’s ability to record colour and detail simply ends at either end of that range: highlights clip to white and shadows clip to black.
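
A toy model of that behaviour, in Python (normalising the sensor's usable range to 0..1 for simplicity):

```python
def sensor_response(luminance: float) -> float:
    """Record linearly within the sensor's range, then clip hard:
    anything above the range goes to white, anything below to black."""
    return min(max(luminance, 0.0), 1.0)

print(sensor_response(0.5))   # 0.5: recorded as-is, wherever it falls
print(sensor_response(1.7))   # 1.0: highlight clips to white
print(sensor_response(-0.2))  # 0.0: shadow clips to black
```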

Trichromatic Theory

The trichromatic theory holds that we perceive colour through three types of cone cells, each most sensitive to red, green, or blue light. It shouldn’t be a surprise, then, that all colours in luminance output devices (cameras, computer monitors, projectors, and so on) are composed of varying combinations of red, green, and blue. Because RGB are the colours of light, if you add all three colours together, you get white. Subtract all three colours and you get black. That is the basis of the RGB colour model.

The print colour model—CMY—is the inverse of the RGB model and, thus, also based on the trichromatic theory. CMY are the colours of print. Ink absorbs certain wavelengths of light, and reflects others, to create colour. If you subtract each of red, green, and blue from white, you get the colour opposites: cyan, magenta, and yellow, or CMY. If you add all three colours (CMY) together, you get (almost) black.
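
That inversion is literally one subtraction per channel. A minimal sketch (channels normalised to 0..1):

```python
def rgb_to_cmy(r, g, b):
    # CMY is the inverse of RGB: subtract each channel from white (1).
    return (1 - r, 1 - g, 1 - b)

print(rgb_to_cmy(1, 1, 1))  # white light -> (0, 0, 0): no ink at all
print(rgb_to_cmy(0, 0, 0))  # black      -> (1, 1, 1): all three inks
print(rgb_to_cmy(1, 0, 0))  # red        -> (0, 1, 1): magenta + yellow ink
```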

Opponent Process Theory

The opponent process theory suggests that the cone cells of our eyes are neurally linked to form three opposing pairs of colour: blue versus yellow, red versus green, and black versus white. When one of the pair is activated, activity is suppressed in the other. For example, as red is activated, we see less green, and as green is activated, we see less red.

If you stare at a patch of red for a minute, then look at an even patch of white, you’ll see an afterimage of green in the middle of the white. This is the opponent process at work in your vision. We see green after staring at red because the staring has fatigued the neural response for red, which allows the neural response for green to increase.

See more on tutsplus

Colour on computers is a minefield. Our eyes perceive light differently from a camera, and settings like gamma can completely mess up the way mixed colours are displayed.

LIGHT IS LINEAR

Basically, if you double the energy emitted from a light source while the distance from that source stays constant, the light intensity at that point will also double. Easy, right?

THE INVERSE SQUARE RULE

Imagine a single light source in a massive dark room. Standing right next to the light, you’ll experience the highest light intensity possible. Moving to the far end of the room, you’ll experience the least intensity in the room, because the light intensity diminishes over distance.

However, it doesn’t diminish linearly as distance increases. If you stand halfway between the light source and the far end of the room, the light won’t be half as bright; it will actually be approximately a quarter as intense. The light intensity is inversely proportional to the square of the distance from the light source.
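
In code, the whole rule is one line (distances normalised so intensity is 1.0 at distance 1.0):

```python
def relative_intensity(distance: float) -> float:
    # Inverse-square law: intensity falls with the square of the distance.
    return 1.0 / distance ** 2

print(relative_intensity(2))  # 0.25: double the distance, a quarter the light
print(relative_intensity(4))  # 0.0625
```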

HUMAN PERCEPTION IS NOT LINEAR

This is the real-world physics of light. However, our perception of luminance is quite different, and that matters when it comes to how we map real-world linear luminance values to perceived brightness. We are more sensitive to small changes in luminance at the low end of the scale than at the high end.

THE GAMMA CURVE – LINEAR VS LOG

By encoding luminance non-linearly, using a more or less logarithmic curve, we can assign a larger number of smaller increments to the low and mid end of the brightness scale, and fewer, larger increments higher up, all the way into the extended highlights.

A purely linear mapping divides the values perfectly evenly between 0 and 1023 across the scale of linear luminance, so the midpoint of 512 sits exactly halfway between black and white: 50% grey, right? Wrong. Because our perception is non-linear, a value of 512 actually looks like roughly 75% grey. With linearly mapped values, far fewer values end up at the dark end of the perceptual scale than at the bright end.
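
A quick sanity check of those numbers, using the common gamma-2.2 approximation of perceptual response (the exact figure depends on which transfer curve you assume):

```python
GAMMA = 2.2  # a common approximation of human brightness perception

def perceived_brightness(linear: float) -> float:
    """Roughly how bright a linear luminance value looks (0..1)."""
    return linear ** (1 / GAMMA)

# Code 512 on a linear 10-bit scale is 50% luminance...
print(perceived_brightness(512 / 1023))  # ~0.73: looks like ~75% grey

# ...while perceptual mid-grey sits at only ~22% linear luminance, so a
# linear mapping spends barely a fifth of its codes on the darker half:
print(round(0.5 ** GAMMA * 1023))        # ~223 of the 1023 codes
```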

See more on dcinema

In digital photography we are fundamentally concerned with brightness (luminance) in a scene, which needs to be converted into a coded value (dependent on bit-depth) of video signal strength (sometimes represented in millivolts: mV) in order to reproduce an image. Put simply, a digital camera assigns a number to a specific amount of brightness in a scene, and that number is output as voltage. On set we can view the intensity of this voltage by running our video signal through a waveform monitor and noting its IRE value. A digital camera’s ability to interpret variations in light intensity within a scene is directly related to its bit-depth: the greater the bit-depth, the more luminance values a camera can discern. An 8-bit camera can discern 256 intensity values per pixel per color channel (RGB); a 10-bit camera, 1,024 values; a 12-bit, 4,096; and a 14-bit sensor, 16,384. It’s easy to see why bit-depth plays a huge role in a camera’s dynamic range.
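
The values-per-bit-depth arithmetic quoted above is just powers of two:

```python
for bits in (8, 10, 12, 14):
    print(f"{bits}-bit: {2 ** bits:,} luminance values per channel")
# 8-bit: 256, 10-bit: 1,024, 12-bit: 4,096, 14-bit: 16,384
```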

A digital camera encodes these luminance values linearly. That is, for every discrete step of difference in luma, the camera outputs an equal step of difference in voltage or video signal.

The human eye is sensitive to relative, not discrete, steps of difference in luma. For example, under a full moon your eye will have no problem discerning your immediate surroundings. If you were to light a bonfire, the relative illumination coming from the flames would certainly overpower the moonlight. Conversely, if you were to light that same bonfire at high noon, you would be hard pressed to notice any discernible increase in illumination. This is why we use f-stops (the doubling or halving of light) to interpret changes in exposure.
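
Because exposure is relative, the natural unit is the base-2 logarithm of the luminance ratio. A sketch with made-up illustrative luminance numbers for the bonfire example:

```python
import math

def stops_between(l1: float, l2: float) -> float:
    """Exposure difference in f-stops: each stop doubles (or halves) the light."""
    return math.log2(l2 / l1)

print(stops_between(0.25, 1_000))       # ~12 stops: bonfire vs. moonlight
print(stops_between(100_000, 101_000))  # ~0.01 stops: same bonfire at noon
```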

What we can learn from the difference between linear and logarithmic responses to luminance is that a linear approach can discern more discrete values in the highlights of an image, while a logarithmic approach can discern more subtleties in the shadows. This is because a digital camera has only a finite number of bits in which to store a scene’s dynamic range, and with linear encoding most of those bits are used up capturing the brightest portions of the scene.
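
You can see the lopsided allocation directly: in a linear encoding, every stop down from white gets half of the remaining code values, as this sketch shows for a 10-bit scale:

```python
# Each stop below white spans half the remaining linear code values,
# so the single brightest stop consumes half of all 1024 codes.
total = 1024
for stop in range(1, 6):
    print(f"stop {stop} below white: {total // 2 ** stop} code values")
# stop 1: 512, stop 2: 256, stop 3: 128, stop 4: 64, stop 5: 32
```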

See more on thedigitalparade