Daily practice - Finding Awe in the City

As my current research and project development relate to awe (or the lack thereof), nature deprivation and hypernatural environments, I decided to run two daily experiments in parallel.

First, I would attempt to find awe in the city. I didn't know whether awe-inspiring sights would turn out to be natural or man-made, but I tried to pay more attention to my environment beyond the mindless morning commute and shoegaze wandering. Second, I tried to artificially insert awe into scenes of my daily life, drawing inspiration from my travels and from natural sights that left a lasting impression on me.

As I somewhat suspected, finding natural awe in the city is a very difficult task. The little greenery hidden in the otherwise concrete jungle is tame, orderly and scaled so as not to interfere with the goings-on of the city. Trees are equidistant and manicured, bushes are lined around lawns and pavements, and nature is reduced to mere decoration - hardly an awe-some sight, and quite the opposite. My search left me with a longing for the wild, ranging mountains I saw when I traveled in New Zealand, Japan and Germany.

I did, however, find awe and wonder in the remarkable architectural achievements situated in the city, from One World Trade Center to the adjacent Oculus hall and its spreading wings, the Empire State Building and Grand Central's main concourse. I even got a chance to see New York's beautiful skyline from New Jersey's waterfront.

It is striking, though, how urban and manufactured these views and structures are, and how the city interferes with the nature that surrounds it. Light pollution blankets the starless skies and the water flows slowly through the lined rivers. I tried to imagine an alternative in which the city blends into nature rather than erasing and obstructing it, by simply photoshopping Washington Square Park and New York's downtown skyline with a bit of natural wonder. I have to say that after viewing the (somewhat sloppy) results, I wish I were closer to nature every day.

3D print finishing (and packaging)

Finishing

Over the past few weeks I spent time trying to bring 3D prints to my desired level of finish - the point where you wouldn't know a piece was 3D printed if you ran your finger over it, or looked at a paint job and saw none of the obvious layer lines.

Unsurprisingly, that means I spent most of this week sanding, and priming, and sanding, and priming again (as I detailed in "The Zen of Sanding"). I started with a very low grit, 80, to remove all the supports, rough out the adhesion plate and get rid of all the excess material. I moved up to 220 as a first pass to smooth out the print lines - this proved to be the most important pass, as all subsequent sanding would rely on the quality of this run and on smoothing the print surface as much as possible.

Next I used 440, 660 and 800 grits to incrementally smooth out the surface of the print. After all the sanding was done, the surface was really smooth to the touch and the print lines, while still visible, were softened. I then repeated the process for all the prints. All 12 of them.

Then came time to prime them white. I stuck with the Rust-Oleum plastic primer and cleaned up the shop's paint booth. I made a makeshift painting rack that could fit all the toys on two plates so that I could quickly rotate them. I applied coats as light as my heavy hand would allow and used a blow dryer between coats to dry the paint. I didn't want to rush the final sculpts, so the whole painting process took about three hours. In the end, they came out looking pretty smooth, with mostly paint texture on the surface rather than 3D print lines.

The final step was to lightly sand the primer, and they'd be ready for decoration. I was hoping to get another coat of primer in after sanding but was running out of time to hand them out. I used 1200 grit sandpaper and tried to only brush it against the paint so as not to rub it off completely.

Packaging

I started to design packaging for the toys, focusing on the fox first. I wanted to give each package a sense of the character's backstory or attitude and, since the fox is modeled after the Japanese bullet train, I decided to give the packaging some speed.

I started with a simple box and pulled it backwards before unwrapping it, which made it look like a cart speeding forward. I initially wanted to print on the walls but decided to go with windows instead, to complete the look and expose the toy.

The Zen of Sanding

This week was all about finishing and painting, and I decided to take the time to try to get a decent finished result from 3D prints. After jumping straight to priming in previous weeks, I took more time to prep the prints and smooth out the PLA surface before painting.

I started by doing a quick sanding test using 220 grit sandpaper. The shapes I was sanding were quite smooth and flat, so it was easy to sand them down quite well and get a nice, clean surface. At that stage, I just wanted to quickly go through sanding and priming to get a sense of the process. For this test, I used a gray unlabeled primer from the cabinet and applied a few thin coats in the spray booth.

Though initially things looked promising, a few issues became apparent:

  1. ITP's paint booth is in a wood shop (literally the worst possible location) and must be cleaned and vacuumed before every paint job. The dust and debris floating in the air stick to the paint and ruin the smooth surface.

  2. Simple primer doesn't stick to PLA and was easily brushed off even after the paint had been drying for hours.

  3. I didn’t sand enough…

I printed the complete sculpts with thicker wall settings, giving me more room to sand the rough surfaces with multiple runs of higher grits. I started with a coarse 80 to remove the supports and bottom adhesion, moved up to 120 to clean up the print ridges, then 220 to smooth out corners with the sandpaper curved around my finger, and 380 + 440 to get a clean final sand for a smooth surface.

I must say that I was very pleasantly surprised with the results of the sanding. Even though the process took a while, it was very rewarding to run my finger against the sculpt surface and feel it smooth out a bit more with every pass. Friends who looked at the final pieces couldn't believe a 3D print could be so polished and smooth.

Since I had put a lot of effort into sanding, I didn't want to mess up the paint job again and went to Ace for some guidance. They recommended a special white plastic primer, so I got a can and headed to the shop. After cleaning the booth I started applying several light layers, blow-drying the paint before moving on to the next one. Overall, I applied 6 layers of primer, and this is the result:

I really love how the primer layer came out. Even though you can still see the print lines, they give the sculpt a sense of speed and forward movement, making the fox head look almost like Sonic the Hedgehog!

I'm going to spend more time sanding the next batch of sculpts and try to achieve a completely smooth surface. I hope a second color coat after the primer will fill in the cracks as well.

Materials, composition, and things that go wrong

This week was the dreaded "Amazon is late, the 3D printer is busted, I don't know how to CNC and it's 3 AM already" week, but I pushed through and made progress on two techniques I intend to combine for the final art toy pieces I'm making.

Digital Kintsugi

The concept for the toys I'm designing is "ownership and emotional connection through repair". The digitally designed figures are (digitally) broken and the owners must put them back together, making them both unique and more beautiful. This approach is very much inspired by the Japanese tradition of Kintsugi - carefully repairing broken ceramics with gold-dusted lacquer, making them more beautiful than the original.

I designed a very basic shape in Houdini and broke it, then separated each piece and 3D printed all of them. Despite using quick-and-dirty print settings, the edges came out quite smooth.

I then moved on to the repair step and wanted a way to prototype the Kintsugi look without having to use real gold paint. I mixed plastic glue with gold mica powder and, once the powder was fully mixed in, the glue looked shiny and consistent. I then followed the model in reverse and glued all the pieces back together.

I was more concerned with testing whether the 3D printed parts would come together well than with doing a clean job, so I ended up getting gold glue all over the place (including my new black pants), but it worked out (!) and I made my first digital Kintsugi piece. While there's a lot of room for finessing and refinement, I'm super excited about this new combination of traditional craft and digital fabrication, and the concept of manually repairing broken digital items.


Layered Acrylic and working with additive-subtraction(?)

Inspired by the beautiful wooden pieces we saw made of layered skateboard decks, I decided to try out techniques for adding color to material before the subtraction process, and not just as a finishing or painting step. I wanted to layer acrylic of different colors and mill the character parts out of the color-graded blocks.

To begin with, I cut 1/16" sheets of acrylic into 4x4" squares in black and white, then applied acrylic adhesive to each layer and quickly glued the black and white layers in alternating sequence. I made a total of 3 blocks, two tall and one short, and clamped them to dry overnight. The next morning, I released the blocks and all the knocking sounds and looseness in the material had disappeared - each felt like a solid chunk of acrylic.

Unfortunately, I didn't get my Othermill bits in time, so I couldn't proceed with the milling, but I learned some cool modeling techniques in Fusion 360, and how to use the CAM module that comes with it. More on this front next week!


Side quest - Othermill fans

Finally, in preparation for milling the acrylic, I stumbled upon a blog post on Bantam Tools' website showing how to make small 3D printed fans that mount on the router bits of an Othermill. These fans clear out milled material, making for a cleaner process and a smoother finished product. I made a few right away and stashed them until I actually know how to use the Othermill.


Bodies in Motion - Lab Reports

Week 1 - Rigid Bodies

The first week in the studio lab was spent learning how to set up and calibrate the OptiTrack motion capture system from scratch, and how to set up rigid-body tracking and streaming to Unreal Engine.

Calibrating the system involves several steps: masking the tracking area for noise from shiny objects, using the calibration wand to sample the space so that the cameras align to each other, and using the ground plane to align all the cameras to the room.


Once the room was calibrated we used the ready-made rigid-body trackers. These are small objects (either bought or made) that have unique configurations of markers attached to them. When each group of points is selected in Motive, it can be locked as a single rigid-body object. We tracked a total of three objects and captured a quick motion scene.

In Unreal Engine, we set up the OptiTrack plugin to receive streaming motion capture data and three rigid-body tracking targets. These targets are empty actors that move any movable objects placed inside them in the scene hierarchy; we tried parenting simple geometric objects as well as more dynamic ones, such as motion-reactive particle systems.
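As a rough illustration of the parenting step, a tracked prop can simply be attached to its target actor with Unreal's standard attachment call (this is generic Unreal code, not the OptiTrack plugin's API; the function and names here are my own):

```cpp
#include "GameFramework/Actor.h"

// Attach a movable prop to the empty target actor that the mocap plugin drives,
// so the prop follows the streamed rigid-body transform every frame.
void AttachPropToTrackedTarget(AActor* Prop, AActor* TrackedTarget)
{
    // KeepRelativeTransform preserves the prop's offset from the target,
    // letting us nudge geometry around the tracker's origin in the editor.
    Prop->AttachToActor(TrackedTarget, FAttachmentTransformRules::KeepRelativeTransform);
}
```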

To stream the tracking data from Motive, all we had to do was play back the scene. We then realized that if we go back to capture mode and press the aptly named "Live" button, live tracking data is sent through to Unreal Engine, allowing real-time performance and simulation feedback.

Week 1 afterthoughts

It was surprising to see how easy it is to set up a tracking system using Motive and stream it to Unreal Engine. It got me thinking about the possibilities of performance with live motion capture data, and about reversing the tracking: not placing a person in the scene, but mapping the scene onto a person, à la "Inori". I hope the semester will provide an opportunity for more physical-interaction-based projects using MoCap data, using it to augment a physical space with projection and lights.

Week 2 - Skeletal Tracking

The second week of the lab was focused on capturing skeletal data and mapping human figures in Unreal Engine. This is a more involved process than dealing with rigid bodies, on both the Motive and the Unreal Engine side.

We started, again, by calibrating the studio and then went on to setting up the tracking suits. We fitted each actor with a total of 41 markers in a predefined tracking setup for posture and toe articulation. Once all the markers were assigned, all we had to do was set the actors in a T-pose, select all of the floating points in Motive and assign a skeleton, just as we did with rigid bodies. The difficulty was in setting up the suits correctly, but Motive has some handy real-time visual guides to show you which points are off.

The Unreal scene was already set up for us this week, so all we had to do was set the right naming and IDs, and a blue demon magically started to move around the scene. The downside is that skeletal tracking requires setting up a control blueprint instead of the empty-actor targets, and the two can't be used at the same time, so our attempts to also track some chairs failed for now.

Once both actors were rigged and tracked we had some time to mess around: we used the (surprisingly smooth) office chairs as moving platforms and experimented with recreating scenes of dance, fighting, explosions, flying and swimming. The simulation still renders like bad green-screen keying from an '80s superhero movie, but hey, it's only the second week!

Week 2 afterthoughts

I've been learning how to use Unreal Engine blueprints and built a simple targeting setup to control multiple actors based on motion data. I want to try to create an interface blueprint that will act as an abstraction between Unreal and the tracking hardware, so that the same project could be run with OptiTrack rigs as well as Kinects or Notch trackers.
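In C++ terms, such an abstraction could look something like the sketch below: a single interface that OptiTrack, Kinect or Notch backends would each implement. None of these names come from a real plugin; it's just the shape of the idea.

```cpp
#include "UObject/Interface.h"
#include "MotionSource.generated.h"

// Hypothetical tracking-source interface: scene logic talks to IMotionSource
// and never needs to know which hardware is behind it.
UINTERFACE(MinimalAPI)
class UMotionSource : public UInterface
{
    GENERATED_BODY()
};

class IMotionSource
{
    GENERATED_BODY()

public:
    // Latest world-space transform for a named joint or rigid body;
    // returns false if that name isn't currently tracked.
    virtual bool GetTrackedTransform(FName TrackedName, FTransform& OutTransform) const = 0;

    // Names of everything the backend is currently tracking.
    virtual TArray<FName> GetTrackedNames() const = 0;
};
```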

I'm also working on motion-reactive particle systems that emit based on distance and speed rather than time. I like the idea of being able to trace movement without having an actual figure appear on the screen - the transient artifacts of motion.

Week 3 - Cleanup, Retargeting and Export

This week we learned how to clean up motion capture data in Motive, use Maya to rig characters we design, retarget captured animations to our rigged models in MotionBuilder and bring the joy back to Unreal.

The journey started at the MoCap studio, where Gabriel graciously performed Brazilian numbers and played a drunk and a gorilla for us. We then sat down to clean the data, dealing with unlabeled channels and missing chunks that needed completion. Some takes were done in a matter of minutes; others were a total s%!tshow.

After the data was cleaned, we exported the animations and I started working on a character to rig. I used Oculus Quill to doodle a 3D character, working off a humanoid model that I had successfully rigged in Maya beforehand. I wanted to make sure the pose and proportions were the same so that I could use the same skeleton to rig a weirdly modeled figure that Maya might not be able to rig on its own.

After exporting the model from Quill, I tested my theory in Maya and, to my surprise, re-skinning the rigged skeleton worked like a charm. That allowed me to transfer the skeleton, as well as the skinning weights, to my scribble figure and export it to MotionBuilder.

The rigged 3D doodle figure in Maya

I moved the rigged 3D figure into MotionBuilder and followed the steps to retarget the animation and apply it to the model. I had to re-align the animation clips by moving the position and rotation curves, since the clip didn't start at the origin and Gabriel wasn't facing forward but walking sideways.

Week 4 - MoCap Direction

The fourth week at the lab was all about direction: we tried out a few dynamic scenarios to test our direction chops in translating both environment and emotion into motion capture.

The first scene was trudging through a rushing river; we tied Ridwan's waist and ankles and pulled him back, simulating the push of the water. It was interesting to observe his movements and slowly understand what a body moving through water would look like, noticing details and adjusting direction instructions.

Ridwan crossing a rushing river.

Next we simulated a moonwalk: I hunched behind Ridwan and he leaned on my back, then I began pushing up so he could float in the air. Terrick and Gabriel secured his feet and arms, and we all slowly floated in zero-g.

Ridwan floating in space

We then went on to an elaborate Spider-Man scene involving a moving chair, a ladder skyscraper and a battle with a monster (me). We did a few dry runs and play-by-plays that made it easy to direct as the scene was running; it was great to collaborate and work together to shape the scene.


The absolute highlight of the day was a surprise visit by motion capture child prodigy (and Gabriel's daughter) Stella. We managed to rig her in a tiny suit and she was completely natural and happy to play with her digital double. We concluded with a dance from Beauty and the Beast:


Week 4 afterthoughts

Directing motion capture is tricky, and requires great awareness of the fine details that make up the performance and capture the essence of the emotion the actors are trying to convey.

We worked with props, supported each other and collaborated in shaping the scene. It was great to build our own language for honing direction as the labs go on.

Week 5 - Rigged Cameras and Hand Tracking

This week we experimented with setting up rigged cameras in Unreal, controlled by motion trackers in the studio. This let us walk around with tracked rigid bodies, explore different perspectives of the scene and play with camera movement. It reminded me of the "Making of" footage from Avatar, where the film crew used special camera-mounted displays that rendered the 3D scene and actors overlaid on live green-screen footage.

We moved on to the Perception Neuron suit and tried to set Nico up in it. Full-body tracking was limp but sorta-kinda worked; hand tracking didn't work at all - something to do with the Earth's magnetic field, special space tin, neurons…

Still! I'm really excited about being able to blend together different capture animations, from full body to hands and face, to get as much expression as possible in our animated scenes and as much data to work with.

After the MoCap session I got back to working on a scene I’ve been building over the past few weeks. I realized I could rig and retarget an animation all the way through in Maya using HumanIK (and on a Mac). While Maya is not as complete a motion package as MotionBuilder, it did the trick for simple rigging and retargeting of a skeleton animation.


I've been working on a scene where a character tries to escape one corner of a room, only to have to keep running away from the next. During the actual MoCap session I chased Ridwan around the room shouting and waving my arms, and the sense of urgency really comes through in the capture. Now I wanted to use the environment, lighting and post-processing to complete the effect and convey fear and intimidation.

I set up 4 spotlights, one at each corner of the space, in a blueprint with colliders, such that the 4 colliders covered the entire floor. I then set each spotlight to turn on when a socket attached to the skeleton's head entered its collider, and to aim at the socket.
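For clarity, here's the same logic written out as C++ rather than the Blueprint I actually built; the class and member names are my own invention, but the overlap events and look-at math are standard Unreal:

```cpp
#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "Components/SpotLightComponent.h"
#include "Kismet/KismetMathLibrary.h"
#include "CornerLight.generated.h"

// One corner light: a box trigger covering a quarter of the floor and a
// spotlight that switches on and tracks the head socket while it's inside.
UCLASS()
class ACornerLight : public AActor
{
    GENERATED_BODY()

public:
    ACornerLight()
    {
        PrimaryActorTick.bCanEverTick = true;
        Trigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Trigger"));
        Spotlight = CreateDefaultSubobject<USpotLightComponent>(TEXT("Spotlight"));
        RootComponent = Trigger;
        Spotlight->SetupAttachment(RootComponent);
    }

    // Scene component of the socket attached to the tracked skeleton's head.
    UPROPERTY(EditAnywhere) USceneComponent* HeadTarget = nullptr;

protected:
    UPROPERTY(VisibleAnywhere) UBoxComponent* Trigger;
    UPROPERTY(VisibleAnywhere) USpotLightComponent* Spotlight;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Spotlight->SetVisibility(false); // dark until the runner enters this corner
        Trigger->OnComponentBeginOverlap.AddDynamic(this, &ACornerLight::OnEnter);
        Trigger->OnComponentEndOverlap.AddDynamic(this, &ACornerLight::OnExit);
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);
        if (HeadTarget && Spotlight->IsVisible())
        {
            // Keep the beam aimed at the head socket while it's in this quadrant.
            Spotlight->SetWorldRotation(UKismetMathLibrary::FindLookAtRotation(
                Spotlight->GetComponentLocation(), HeadTarget->GetComponentLocation()));
        }
    }

    UFUNCTION()
    void OnEnter(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                 UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                 bool bFromSweep, const FHitResult& SweepResult)
    {
        Spotlight->SetVisibility(true);
    }

    UFUNCTION()
    void OnExit(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                UPrimitiveComponent* OtherComp, int32 OtherBodyIndex)
    {
        Spotlight->SetVisibility(false);
    }
};
```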

Using exponential fog and volumetric lighting, I was able to create the dramatic sense of military floodlights lighting up and aiming at the person trying to escape. The scene was otherwise dark to accentuate the sense of panic, and the fact that as soon as the figure escaped one light, another immediately spotted it, forcing it to keep running.

Week 5 afterthoughts

I'm really excited about the affordances rigged cameras give us as directors in animated MoCap environments, even more so than what's possible with real cameras. I want to explore using rigged objects as controllers for depth of field, focal points, moving lights and other effects - things "live action" film crews can only dream of (or buy a bunch of expensive transparent drones that don't really exist, so this isn't really possible anyway…).

I also want to keep working on developing the visual aspect of the scenes I’m working on, using the MoCap data as base material and using lights, scenery, camera angles and character representation to complete the story.

My favorite drawing tools (right now)

The past few days I've been reviewing all the drawing tools I use and trying out different ones. I found that I actually use a very wide range of tools for digital painting, drawing, modeling and other forms of computer-based general art-making.

These are my current favorites:

(straight up) Adobe Photoshop

Photoshop is great, and I've been using it for years and years, though mainly for photo editing and composition. I picked up a Wacom digital pen for the first time and tried to draw a quick sketch in Photoshop. While my drawing skills leave a lot to be desired, I was impressed with Photoshop's range of brushes, its quality of pressure and articulation when using a digital pen, and how easy it was to get a feel for expression and control. It was much better than the tablet-based systems I also tried this week, like the Apple Pencil and the Microsoft Surface Pen. I'm also very familiar with Photoshop's photo manipulation and composition features, which will hopefully allow me to incorporate digital drawing into a larger digital art context.

Butt-ugly flower for reference, drawn with a Wacom Cintiq in Photoshop CC

TouchDesigner & GLSL

Most of my recent generative visuals work has been in TouchDesigner using GLSL shaders. I love the visual node-based programming environment, how quickly it allows me to sketch ideas and how it all works in real-time with no render times and potential live control from audio and input signals. I use TouchDesigner for more abstract work and real-time composition but maybe it could complement assets made in Photoshop.

Internal State (Barak Chamo, 2018). GLSL shader in TouchDesigner.

Oculus Medium VR

For an unrelated art toy character design project, I tried drawing and sculpting in VR using Oculus Medium. I found the spatial modeling tool surprisingly intuitive, albeit very basic. I was able to sketch ideas quickly, then modify characters and paint them by manipulating them in space with the Oculus controllers - an unparalleled, easy modeling experience. I even had the chance to export the models for 3D printing, which was a breeze.

Even though it's not technically a drawing tool like Quill or Tilt Brush, Medium felt less toyish and more practical for asset creation - a lot like a VR ZBrush. I like it as a space to explore and mess around, and potentially even export rough sketches that can be iterated on.


VR Character design experiments

I've been experimenting with different approaches to character design, from more abstract forms to kit-bashing and procedural modeling, to create different poses, convey different emotions and make characters that are highly stylized but still relatable.

I used Oculus Medium, a VR sculpting tool, to create different forms and play around with different aspects of character design. In particular, I wanted to see if the key parameters I identified in previous weeks - namely scale and posture - are enough to create a relatable figure even before any features are painted and textures are applied.

I started with simple cubes and spheres and attempted to make a "cute" character; the result was "Block" and "Blob", two characters made of cubes and spheres respectively. What was interesting to me was how, even though the different parts are fixed, the figures seemed to convey different emotions when observed from different angles. "Blob" seemed both shocked and asking for a hug, and "Block" wanted a hug as well but also looked a bit down, depending on where you thought the head was pointing.

This reminded me of something I read in "Understanding Comics": the more a character is stylized and simplified, the more it relates to a broad audience, as people can see themselves in it. The trick, I suppose, is in conveying particular emotions with a limited degree of motion in the design.

Next I tried a few more modeling tools Medium has to offer, like stamps and clay-like modeling. Using stamps I created a "kit-bashed" character that somewhat reminds me of "Biker Mice from Mars", one of my favorite shows as a kid - or just a mecha-steampunk Mickey Mouse. The second character was a sort of devil figure with his tongue out and eyes rolling; I tried Medium's painting tools on this one too.

Medium has very handy export features, so I was able to go from Medium to Cura and straight to the 3D printer in minutes. I tried different print settings and learned a lot about how to improve designs for a 3D-printing manufacturing pipeline. While Cura provides automated support calculation for free-hanging parts, the filament still drips in places, and the supports leave rough edges that require careful removal and cleaning. Still, pretty decent for a first attempt.

The final step was to give all the prints a coat of matte grey primer. This really brought out the features, as lights and shadows became much more visible (I had used clear filament, which was very tricky to read visually). Priming 3D printed filament was tricky, but with patience, a steady hand and several layers I think I managed a fairly clean coat. Unfortunately, the primer also brought out all the small imperfections of 3D printing.

All in all, I like this quick prototyping pipeline. I'm now working on refined designs in Houdini and Cinema 4D and will try to print them on higher-quality printers and give them some more finishing love.

Art Toy Concept & Turnaround

This week I explored different forms of art toys, and how figure and posture are used to imply mood and emotion.

I started by sketching familiar characters I like: Marvin the robot from "The Hitchhiker's Guide to the Galaxy", Danbo the cardboard boy, Uamou & Boo and others. All these characters share stylized, exaggerated physical features and postures that capture their mood and convey an emotion, even though the painted detail is very minimal.

I then started sketching my character. I knew I wanted to feature the oversized head also found in Dunnys and Munnys, but in a posture that is not menacing - rather curious and melancholic, a lot like Marvin the robot, one of my favorite motion picture characters.


Finally, I placed the character in a turnaround sheet. It's very basic, mainly used to get a sense of the scale of the head compared to the small body, and of the head's tilt.

With this character I'm planning to focus on shape, physique and posture, as I think it'll make a cool blank candidate.


The Temporary Expert

Part 1 - Energy Field Guide

After reading through Steve Easterbrook's systems analysis of GMO protests in the UK, I decided to approach the system mapping of caloric energy from multiple perspectives, understanding the different points of view and boundaries that could be applied to this topic.


Caloric Energy - Systems

Caloric energy can be described in several different ways and, expanding on those descriptions and their perspectives, several systems can be mapped:

1. A system of units, measurements and conversion of energy

Scientists across fields have established defined units of measurement that quantify and describe anything from time and distance to scale and magnitude. Energy, for example, is measured in joules, and various kinds of energy and energetic substances can be compared and standardized. Caloric energy, in this context, is defined as the amount of energy needed to heat one gram of water by one degree Celsius.
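To make that definition concrete, it follows from the specific-heat relation (standard physics, not something from the original guide):

```latex
Q = m \, c \, \Delta T, \qquad c_{\text{water}} \approx 4.184 \ \mathrm{J\,g^{-1}\,{}^{\circ}C^{-1}}
```

Heating m = 1 g of water by ΔT = 1 °C therefore takes about 4.184 J, which is one (small) calorie; the "Calorie" on food labels is the kilocalorie, 1000 times that, or roughly 4184 J.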

The international system of units does not, however, exist in isolation. It is subject to the progress and politics of science - take, for example, the metric and imperial measurement systems, standardized in parallel yet mutually incompatible. The process of measurement itself also depends on factors such as scientific progress and environmental conditions: measuring calories involves heating water, which requires different amounts of energy depending on local climate and barometric pressure.

2. A system of food, nutrition, nutritional values and physiological energy

Different foods, depending on their nutritional content, contribute differently to a balanced diet, sustaining the human body with both energy and micro- and macronutrients. A food's nutritional quality is usually measured via its nutrient density - the ratio of nutritional value to caloric energy; calories that come from foods with low nutritional value are considered empty calories.
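As a rough sketch of how such an index works (one common formulation; the exact nutrients counted vary between indices):

```latex
\text{nutrient density} \approx \frac{\text{nutrient content per serving (protein, fiber, vitamins, minerals)}}{\text{energy per serving (kcal)}}
```

"Empty calories" score near zero on an index like this: lots of energy in the denominator, very little in the numerator.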

The calorie intake of an average human varies by country and is used as a benchmark for the amount of food we should eat daily. Commercially sold foods must be labeled with their caloric value as well as their nutritional value. Only some nutrients are detailed on the label, and which ones depends on the governing health and nutrition policy of the local government.

3. A system of marketing, food industry, capitalism, labor and food politics

The food industry - health foods as well as fast and junk food in particular - is focused on marketing nutritional value, or the impression of it, in its food products. Terms such as "low calorie", "diet" and "zero calorie" are frequently used to persuade consumers of health benefits or a positive nutritional balance in these products. Such terms are now regulated in many countries around the world, with strict standards defining what constitutes a "low calorie" food.

Caloric value also plays a role in the actual ingredients of such foods, as economies of scale have made "empty calories" - calories from fats and simple carbs - cheaper. The fact that not all calories are equally priced is at the core of a health and obesity epidemic that impacts low-income households the most, as they are often unable to afford food with high nutritional value and opt for "junk food" that has more marketing value than nutritional value.


Caloric energy is an interesting subject for systems analysis, as different perspectives and system boundaries relate it to science, health, politics, business and marketing. The systems described above are only three top-level, distinct ones; many crossing systems could probably be identified as well.


A Taxonomy of The Science and Business of Food

In creating a taxonomy of the science and business of food (measurement), I decided to set a particular system boundary and perspective that will allow me to focus my guide and direct further research. I want to shine a light on how misleading marketing, the use of nutritional terminology and buzzwords, fuzzy food science and an overwhelming range of food quality indices work in favor of large food corporations and against the individual trying to maintain a balanced, healthy diet.

In exploring this topic and forming a taxonomy, it became ever clearer that the number of "scientifically established" nutritional variables is far greater than a casual shopper can be expected to memorize and consider. At the same time, I was surprised to discover how many terms have become regulated by the FDA due to abuse, misinformation and blatant disregard for consumer health by food manufacturers - these include "low fat", "zero sugar", "low calorie" and even "artisanal", for crying out loud!


The Science and Business of Food - A Visual System

An initial taxonomy of Food Measurement Science and Business


Food Deserts, a Visual Study

In order to design a field guide for survival in "food swamps" and "food deserts", I had to expand and focus my taxonomy of "Food Quality & Measurement" and incorporate aspects of the food business to build a more complete picture of the reality of food deserts. I've also begun to study visual references to food deserts and how they leverage visual analogies, metaphors and metonymies to illustrate and draw attention to different parts of the problem: mapping food deserts as an epidemic, using cartoons to link food deserts to junk food conglomerates, and providing visual aids to help navigate the knowledge area.

Maps are a very powerful way of illustrating the extent of a particular problem or the reach of an issue. When looking for maps of food deserts in the United States, I found that most adopted a light-to-dark, sand-colored brown scheme reminiscent of desert colors. It was surprising to see how consistent this color scheme was across different publications and online references, and it made clear that color works as a strong visual reference.

Many caricatures juxtapose familiar images of junk food with desert scenes to drive home the point that food deserts can be deceptively full of (unhealthy) food options, and that these can act as a mirage, masking the issue and fooling those who cannot afford, or are uneducated about, healthy food and balanced nutrition.

What I loved about these illustrations is that they reclaim branding and iconography, so dear to the hearts of corporate marketers, to pose a poignant comparison between fast food chains, junk food and food deserts. In these illustrations, the golden M doesn't stand for "I'm lovin' it"; it stands for "You're feeding me trash".

Infographics are also heavily used to illustrate the extent of the nutritional crisis in the United States, providing extended information and breakdowns in a digestible (no pun intended) visual format. While more abstract in reference, these infographics still draw direct comparisons between certain foods and brands (a McDonald's burger stars here too) and the general issue, using them as icons for food deserts as a whole.

Finally, I looked for imagery familiar to me from Bushwick, Brooklyn and other parts of the city. These images are in themselves a juxtaposition, depicting grocery and convenience stores stocked only with low-nutrient-density, processed food options. This is one of the trickiest problems in battling food deserts: food seems plentiful and easily available, yet it is not really nutritious - a mirage.


Survival Guides, a Visual Study

The second part of my visual prep study was into harsh-environment survival guides, as I decided to position my guide as a "Food Desert & Swamp Survival Guide". In particular, I was looking for examples of visual styles, of focus areas in survival (navigation, classification, tool making, etc.) and of the overall tone of the guide (directive, encouraging, dry, humorous, etc.).

My first stop in the search was wikiHow's desert survival guide, which provided a very comprehensive walkthrough of identifying desert objects and fauna, packing, extracting water and navigating challenging terrain. Funnily enough, the packing guide perfectly illustrates the issue at the core of food deserts.

The guide uses many visual techniques to illustrate instructions, from lists of objects to embedded infographics, zoomed-in step-by-steps and follow-the-arrow demonstrations. These are all great inspiration for my guide.

Next I looked for references for the visual style and found old SAS, Navy and Air Ministry desert survival guides. They all seemed quite dated and ranged from a more lax tone to a stern, pragmatic one. I also liked their vintage look.

What I learned from looking at these guides is that they are designed for on-site practical use, focusing on both immediate problem solving and long-term planning and preparation - for example, knowing how to plan ahead for travel at different times of the day, as well as how to extract water from plants in a moment of need.

This applies to my plan for the Food Desert Survival Guide, as I want it to serve as a practical how-to in moments of need (while shopping, for instance) but also provide guidance toward transforming the community and building a more sustainable nutritional environment.


Building a visual style

Now it's time to start crafting the visual style of my own guide. As I mentioned, I quite like the visual metaphor of a real desert used in the infographics and caricatures, as well as the subversion of marketing and branding into warning signs - using the same icons and logos not to draw customers in, but to warn them. I thought about a few examples of where this visual juxtaposition and combination would be most poignant.

I started with the field guide's cover and tried to pack in the visual references, style and tone I want to follow in the rest of the guide. Here's a very rough draft that uses illustrated desert scenery and combines it with the fonts and color schemes of American fast food chains:

First draft of the field guide's cover


Drafting the guide’s outline

I sketched out a few key pages the guide will feature, with a focus on style and iconography and on particular skills I think are important, such as understanding nutrition facts, framing food deserts as an epidemic through mapping and visualization, and introducing long-term skills such as community gardening.