My final ICM project – the emoji translator and mixer

For my final project, I made an emoji translator and mixer. The program converts the user’s typed words into emojis, instead of making the user browse through tabs to find the right icon. It also lets the user combine two emotions into one animated icon. Resources:

  • I used this list of 5,000 words as the database for matching words to icons.
  • After failing to use Apple Color Emoji as a font, I got the icons as PNG files and used those instead.

Libraries in use

  • ControlP5 library for UI elements
  • gifAnimation library for GIF animation
  • java.awt.Frame to open a second frame

Video documentation: (video)

The code for translating user input into emojis in a second window:

import controlP5.*;
import java.awt.Frame;
import java.awt.BorderLayout;

PFrame pframe;
secondApplet s;
ControlP5 cp5;
Table dataset;
PImage img;
PImage img2;
float a;
PFont f;
PFont sf;
public StringDict emojis;
public String translated = "_______";
public String translated2 = "_______";
public String date = day() + "_" + hour() + "_" + minute();

void setup() {
  size(420, 370);
  frameRate(12);
  imageMode(CENTER);
  dataset = loadTable("words_emoji.csv", "header");
  emojis = dataset.getStringDict("WORD", "EMOJI");
  cp5 = new ControlP5(this);
  pframe = new PFrame(); // was a shadowed local declaration in the original; assign the field instead
  s = new secondApplet();
  f = createFont("Arial", 16, true);
  sf = createFont("Arial", 10, true);
  // textfield 1
  cp5.addTextfield("1")
     .setPosition(20, 100)
     .setSize(190, 40)
     .setFont(createFont("arial", 20))
     .setFocus(true);
  // textfield 2
  cp5.addTextfield("2")
     .setPosition(20, 180)
     .setSize(190, 40)
     .setFont(createFont("arial", 20))
     .setAutoClear(true);
  // clear button
  cp5.addBang("clear")
     .setPosition(20, 320)
     .setSize(40, 20)
     .getCaptionLabel().align(ControlP5.CENTER, ControlP5.CENTER);
  cp5.addSlider("MIX")
     .setId(1)
     .setPosition(20, 270)
     .setWidth(190)
     .setRange(2, 6)
     .setValue(3)
     .setNumberOfTickMarks(6)
     .setSliderMode(Slider.FIX);
}

void draw() {
  int indent = 20;
  textFont(f);
  fill(255);
  background(0);
  text("Type two words.", indent, 40);
  text("Press return after each word.", indent, 65);
  textFont(sf);
  text(translated, indent + 13, 152);
  textFont(sf);
  text(translated2, indent + 13, 232);
}

public void clear() {
  cp5.get(Textfield.class, "1").clear();
  cp5.get(Textfield.class, "2").clear();
}

void controlEvent(ControlEvent theEvent) {
  switch (theEvent.controller().id()) {
    case 1:
      a = (float) theEvent.controller().value();
      println("a slider event A.");
      break;
  }
  if (theEvent.isAssignableFrom(Textfield.class)) {
    if (theEvent.getName().equals("1")) {
      translated = theEvent.getStringValue();
    }
    if (theEvent.getName().equals("2")) {
      translated2 = theEvent.getStringValue();
    }
  }
}

public class PFrame extends Frame {
  public PFrame() {
    setBounds(700, 320, 200, 200);
    s = new secondApplet();
    add(s);
    s.init();
    show();
  }
}

import gifAnimation.*; // in Processing this import belongs at the top of the sketch

public class secondApplet extends PApplet {
  GifMaker gifExport;
  String path = "/Users/zivschnieder/Documents/Processing/two_frames/data/";

  public void setup() {
    imageMode(CENTER);
    gifExport = new GifMaker(this, "emo" + date + ".gif", 256);
  }

  void draw() {
    background(0);
    images();
    animateGif();
  }

  public void images() {
    float x = 0;
    float y = 0;
    float b = random(2, 8);
    if (emojis.hasKey(translated)) {
      String file = emojis.get(translated);
      PImage img = loadImage(path + file);
      img.loadPixels();
      int dimension = img.width * img.height;
      for (int i = 0; i // … (the rest of this sketch was cut off in the original post)

Code for the mix and GIF animation from two images:

import gifAnimation.*;
import controlP5.*;

GifMaker gifExport;
PImage img;
PImage img2;
PFont sf;
ControlP5 cp5;
public float a = 5;
public float b = random(3, 6);

void setup() {
  String date = day() + "_" + hour() + "_" + minute();
  println(date);
  gifExport = new GifMaker(this, "emo" + date + ".gif");
  size(220, 220, P3D);
  frameRate(12);
  imageMode(CENTER);
  sf = createFont("Arial", 8, true);
  cp5 = new ControlP5(this);
  cp5.addSlider("MIX IT!")
     .setId(1)
     .setPosition(20, 180)
     .setWidth(140)
     .setRange(2, 6)
     .setValue(2)
     .setNumberOfTickMarks(6)
     .setSliderMode(Slider.FIX);
}

void draw() {
  background(0);
  float x = width / 2;
  float y = height / 2 - 20;
  float b = random(2, 8);
  loadPixels();
  PImage img = loadImage("emo419.png");
  int dimension = img.width * img.height;
  for (int i = 0; i // … (the rest of this sketch was cut off in the original post)



This is a sketch for an hourglass geometric movement that I am working on for physical computing.
It counts a minute using millis(), and there are two rectangles located behind a transparent PNG shape.
The rectangles are drawn in CORNERS mode and their Y positions are attached to millis(), moving down or up with time.

I had a problem getting the bottom one to appear, but after seeing Shiffman for help, I realised that it might be because of my screen size vs. the sketch size.
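The timing logic described above can be sketched outside Processing as plain Java. This is a minimal sketch only; the real sketch would use Processing’s millis(), map() and rect(), and every name and pixel value here is a hypothetical stand-in.

```java
// Sketch of the hourglass timing logic: map elapsed milliseconds in a
// one-minute cycle to the Y position of a rectangle edge, so one
// rectangle drains down while the other fills up. Names hypothetical.
public class HourglassMath {
    static final int MINUTE_MS = 60000;

    // Linear interpolation, like Processing's map() for this fixed range;
    // elapsed time is clamped at one minute.
    static float yFor(long elapsedMs, float yStart, float yEnd) {
        float t = Math.min(elapsedMs, MINUTE_MS) / (float) MINUTE_MS;
        return yStart + (yEnd - yStart) * t;
    }

    public static void main(String[] args) {
        // Top rectangle drains: its bottom edge moves from y=0 down to y=200.
        System.out.println(yFor(0, 0, 200));       // 0.0
        System.out.println(yFor(30000, 0, 200));   // 100.0
        // Bottom rectangle fills: its top edge moves from y=400 up to y=200.
        System.out.println(yFor(60000, 400, 200)); // 200.0
    }
}
```

In the actual sketch, draw() would call something like this every frame and hand the result to rect() in CORNERS mode.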

Response to ‘Out in Public’

Prior to reading this interview with Natalie Bookchin, I was not familiar with her work,
although it’s right up my alley and really suited the state of mind I am in right now.
I found some of her projects very powerful and engaging, and I tend to be drawn to that type of obsessive anthropological documentary exploring the times we live in.

What I liked most of all, and couldn’t put into words until she did it herself, was that unlike other artists who visualise data gathered from the internet, she brings out the individual story. In her works, the individual is not another abstract dot on a map, but has a voice. This voice is heard within a choir, but it is heard.


Final project progress


For my final project, I am making an emoji translator/mixer. One thing I find very annoying about using emojis is that instead of saving a little typing time, I waste a quarter of my day looking for the right one, because there are so many of them and they are not organised clearly. I wish there were a way to simply type a word and have the emoji appear instead! I have collected a list of the 5,000 most common words in the English language, and I am matching them to emojis. Since there are only 450 emojis at the moment, images will match more than one word. Also, sometimes one emoji does not exactly match the way you are feeling at that moment, which is why we get lines of icons to express one state. What if there were mixed emojis that we could make on our own? The second part of the project will be a mixer: the user will type in two words, two images will appear woven into each other, and the user can then create an animated GIF from the two. I’ve already made some GIFs with random combinations of words, as you can see here:
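The “woven into each other” effect can be thought of as alternating runs of pixels from two images. Below is a minimal plain-Java sketch of that idea over bare pixel arrays, assuming the two images are the same size; the gap value plays the role a mix slider would, and all names are hypothetical, not the actual sketch code.

```java
// Hypothetical sketch of weaving two equally sized images by alternating
// pixel runs: every `gap` pixels, we switch which source image supplies
// the pixel. A larger gap gives a coarser interleave.
public class Weave {
    static int[] weave(int[] a, int[] b, int gap) {
        int[] out = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            // (i / gap) counts which run we are in; even runs come from a,
            // odd runs from b.
            out[i] = ((i / gap) % 2 == 0) ? a[i] : b[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a = {1, 1, 1, 1, 1, 1};
        int[] b = {2, 2, 2, 2, 2, 2};
        // gap of 2: two pixels from a, two from b, and so on.
        System.out.println(java.util.Arrays.toString(weave(a, b, 2)));
        // -> [1, 1, 2, 2, 1, 1]
    }
}
```

In Processing, a and b would be the pixels[] arrays of the two emoji PNGs, and the result would be written back with updatePixels().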


  I was hoping to use the Apple Color Emoji font, but after encoding issues I chose to use PNG images instead. For the UI elements I used the ControlP5 library, and for the animation I used the gifAnimation library. Right now, I have two separate parts of the project working, with some issues to resolve:

  1. Connecting the parts of the project: I separated the animation because it didn’t work well with the translator. With the translator, I have one string and can’t control the images separately so that they would have different gaps between the pixels.
  2. The text fields need to create separate strings; right now they are doing something weird together.
  3. Limiting each text field to one word.
  4. Moving from one field to the next with tab/space.
  5. Sending the animation into a new window – I’ve started using two windows with a Java library import, but still need to learn how to control it.
  6. Connecting to the web; right now I am manually uploading the GIFs to Tumblr.
  7. Getting PNG files for emoji2, which has some of the best ones.
  8. Designing a nicer interface.
  9. Saving each GIF as a new one; right now they are overwriting each other.
  10. Translating 5,000 words to emojis!
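For issue 9, one common fix is to make each file name unique. Here is a hedged plain-Java sketch of that idea; the real sketch would pull day()/hour()/minute()/second() from Processing, and all names are hypothetical. Day/hour/minute alone collides within the same minute, so seconds and a running counter are added.

```java
// Sketch of issue 9: building a GIF file name that cannot collide.
// The original uses day_hour_minute, so two gifs saved within the
// same minute overwrite each other; adding seconds and a counter
// avoids that. Names are hypothetical.
public class GifNames {
    static int counter = 0;

    static String next(int day, int hour, int minute, int second) {
        counter++;
        return "emo" + day + "_" + hour + "_" + minute + "_" + second
             + "_" + counter + ".gif";
    }

    public static void main(String[] args) {
        System.out.println(next(16, 21, 24, 30)); // emo16_21_24_30_1.gif
        System.out.println(next(16, 21, 24, 30)); // emo16_21_24_30_2.gif
    }
}
```

The counter guarantees uniqueness even for two saves in the same second; an alternative is checking whether the file already exists before writing.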

This is how the translator looks right now: (screenshot)

Pcomp final – week 2

Project sketch

(photos: diffusing light, masking light)

Good news, I am now working with a partner!
Alina joined me and we are working together on the hourglass.

We’re making a virtual geometric hourglass, an abstract interactive art piece that the user can hang on the wall to count down time. When it is turned upside down, the count will restart.

Project sketch


The hourglass will consist of 16 rows of LEDs in a 16″ canvas frame. We made a timeline and a bill of materials, and have already ordered the NeoPixel LEDs. Right now we want to get the prototype working before we get the lights. We’ve also been testing the plexi and canvas to see what visual effect we get.


Bill of materials

max budget – $200

  • 2 tilt sensors / accelerometer
  • 180 LEDs
  • ~$20 plexi – 24 – frosted
  • ~$30 canvas
  • ~$10 wood


Nov 12
ordering parts, meeting with Benedetta

Nov 13
Laser cutting and prototype making

Nov 14
Class, regroup and discussion
user testing?

Nov 15

Friday morning, buying wood and plastic


Nov 19
Tuesday ~3 or 4 PM
start arranging

Nov 20
continue to work

Nov 21
class, show progress

Nov 22 – 28

Nov 28th
fully working, time left for debugging and testing, revisions etc.

Dec 5
final presentation

Reject Salon – my submission for the winter show postcard



Final project proposal

Even though I was excited about the pulse sensor and initially wanted it to be my final project too, I feel I should experiment with other projects and maybe go back to that in the future.

For my final project this semester, I would like to make an hourglass.
Time is a subject that occupies my mind a lot and I have been wanting to make some form of clock for a while.

There will be three hourglasses and they will count a day, an hour and a minute. When turning the screen/frame, the counting will reset. This hourglass is an abstraction of a physical one.
It will include flat geometric shapes only.

The technical aspect is not yet resolved. For now, I have made a Processing sketch that you can see below, and I am thinking of projecting it, although I am open to any ideas for a physical substance that would physically emulate the movement of sand in an hourglass.

The back side of the frame/screen will have a circuit with either a switch or a potentiometer-type sensor, and when the frame is turned, the sketch/projection/movement will start over.
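The restart behaviour can be sketched as a tiny state machine in plain Java. Assumptions: the sensor reports orientation as 0/1 and the clock is something like Processing’s millis(); all names here are hypothetical, not the final circuit code.

```java
// Sketch of the restart logic from the proposal: the frame's tilt
// switch reports an orientation (0 or 1); whenever it flips, the
// hourglass records a new start time so the count begins again.
public class FlipReset {
    int lastState;
    long startMs;

    FlipReset(int initialState, long nowMs) {
        lastState = initialState;
        startMs = nowMs;
    }

    // Returns elapsed time, restarting the count on a flip.
    long elapsed(int state, long nowMs) {
        if (state != lastState) {   // frame was turned over
            lastState = state;
            startMs = nowMs;        // restart the count
        }
        return nowMs - startMs;
    }

    public static void main(String[] args) {
        FlipReset hg = new FlipReset(0, 0);
        System.out.println(hg.elapsed(0, 20000)); // 20000
        System.out.println(hg.elapsed(1, 25000)); // 0 (flipped, reset)
        System.out.println(hg.elapsed(1, 30000)); // 5000
    }
}
```

The elapsed value would then feed the sand animation, wrapping at whatever duration each hourglass counts.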

The cost of this project has a wide possible range, depending on the materials. As a decorative artwork, I don’t think the materials should necessarily be the cheapest available, but I also don’t want it to be overpriced.




Digital input and LCD print

Going over some of the first labs, partially as a setup for my final project.
I tried the “LCD crystal ball” from the Arduino book, since my project is going to include a screen and perhaps a tilt sensor.

There is something wrong with the screen, because it is not printing anything but blank characters; the contrast adjustment with the potentiometer works, though, and the tilt sensor is also working, as you can see in the video. The sensor gives a digital 0/1, which I think will be enough for my hourglass to tell if it has been rotated 180 degrees.
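Since a raw tilt reading can chatter while the frame is mid-turn, a simple software debounce may help. This is a plain-Java sketch under the assumption that the sensor yields repeated 0/1 reads; the names and the stability threshold are invented for illustration.

```java
// The tilt sensor gives a raw 0/1 that can chatter as the frame turns;
// this debounce accepts a new state only after it has held steady for
// several consecutive reads. Hypothetical sketch, not the actual lab code.
public class Debounce {
    int stable;      // last accepted state
    int candidate;   // state we are currently watching
    int count;       // consecutive reads the candidate has held
    static final int NEEDED = 3;

    Debounce(int initial) {
        stable = initial;
        candidate = initial;
    }

    int read(int raw) {
        if (raw == candidate) {
            count++;
        } else {
            candidate = raw;  // new candidate state, restart the count
            count = 1;
        }
        if (count >= NEEDED) stable = candidate;
        return stable;
    }

    public static void main(String[] args) {
        Debounce d = new Debounce(0);
        int[] raw = {1, 0, 1, 1, 1};
        for (int r : raw) System.out.print(d.read(r) + " ");
        // a lone spike is ignored; the sustained 1s eventually get through
    }
}
```

On an Arduino the same effect is often achieved by requiring the reading to be stable for a fixed number of milliseconds rather than a fixed number of reads.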

(video)

Halloween Photobooth

Outside of class, I worked with a group on this year’s Halloween party photo booth. This was a big learning experience for me, with a very tight deadline. Most of the technical work was done by Alexandra from 2nd year, and I learned a lot just by watching her work, planning the project, troubleshooting and improvising.
It was the first time I was part of a physical project that uses technology and is used by a large group of people. It was an interesting lesson, which is why I decided to blog about it.

The initial idea was to make a “PhotoBoo!”. The user would walk into a very dark space and an extremely bright light would hit them in the face. The photo would be taken with a slight delay, when the person was frightened/shocked/angry.

A few questions we had to answer along the way:
– What will the user hear or see while they are being frightened?
– If there is a delay, how will the camera see anything?
– Should we use a regular webcam or a 5D?
– How do we trigger a flash?
– Or should we trigger clip lights using a PowerSwitch Tail?
– How do we avoid getting too many photos from the sensor being triggered by human presence?
– How do we build the dark booth?
– How do we keep drunk people from tripping when they walk in the dark?
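For the “too many photos” question, one possible answer is a cooldown gate that ignores the presence sensor for a while after each shot. This is a plain-Java sketch of that idea only, not the actual Max patch; all names and thresholds are hypothetical.

```java
// Rate-limit the photo trigger: once a photo fires, further presence
// readings are ignored until a cooldown period has passed.
public class Cooldown {
    final long cooldownMs;
    long lastFire;

    Cooldown(long cooldownMs) {
        this.cooldownMs = cooldownMs;
        this.lastFire = -cooldownMs; // so the very first trigger can fire
    }

    boolean shouldFire(boolean presence, long nowMs) {
        if (presence && nowMs - lastFire >= cooldownMs) {
            lastFire = nowMs;
            return true;   // take the photo
        }
        return false;      // still cooling down, or nobody there
    }

    public static void main(String[] args) {
        Cooldown c = new Cooldown(10000);  // at most one shot per 10 s
        System.out.println(c.shouldFire(true, 0));     // true
        System.out.println(c.shouldFire(true, 4000));  // false
        System.out.println(c.shouldFire(true, 12000)); // true
    }
}
```

The same gate works whether the input is a PIR sensor, a distance sensor, or a button.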

The whole project was composed by Alexandra using Max, and the program connected all the different parts: the Arduino, the camera, and the Dropbox folder with the photos.
We ended up using a 5D camera with this part (strangely called a female hot shoe) to trigger the flash.
The flash didn’t work perfectly and until the last minute we weren’t really sure if it was going to work.
The booth was built in room 50; we used dark curtains and created a fairly big space that could contain a group of people. The way to the booth was paved with whiteboards, and the entrance to the booth faced the sensor so that we wouldn’t get side shots of people. We used a projector and a disco ball in the space.

We were troubleshooting until after the last minute, when people were already there, and didn’t even document properly, but I still think we can take pride in what was achieved within ~3 days.

The photos (frames were a last minute improvisation that I deeply regret):

(video)

The code for the flash (courtesy of Laura)

#define BUTTOM_PIN 2
#define CAMERA_FLASH_PIN 3  // the flash pin define was missing in the original; 3 is an assumption

void setup() {
  pinMode(BUTTOM_PIN, INPUT);
  pinMode(CAMERA_FLASH_PIN, OUTPUT);
  digitalWrite(CAMERA_FLASH_PIN, LOW);
  Serial.begin(9600); // open serial
  Serial.println("Press the spacebar to trigger the flash");
}

void loop() {
  if (digitalRead(BUTTOM_PIN) == HIGH) {
    digitalWrite(CAMERA_FLASH_PIN, HIGH); // fire the flash
    delay(50);                            // short pulse so the flash can register
    digitalWrite(CAMERA_FLASH_PIN, LOW);
  } else {
    digitalWrite(CAMERA_FLASH_PIN, LOW);
  }
}

Final Project – Laws of Power

This is our final project for this short yet intense 7-week class. The team included David Tracy, Meg Studer and myself. We chose to make a “How to” video series about gaining power. We wanted to use some type of self-help-for-sociopaths book and chose “The 48 Laws of Power” by Robert Greene. The book is very popular in the hip-hop community. It includes 48 laws that will help you gain power, and we divided these laws into three groups.

1. Controlling your own urges – Shown with a breakfast recipe for pancakes

2. Controlling your appearance – Shown with a lunch recipe for a sandwich

3. Controlling others – Shown with a dinner steak recipe

(video)

Our group presentation

In response to Dennis Crowley’s talk, we chose to deal with the subject of privacy in our age. We didn’t necessarily want to protest against big brother tapping into our personal data, as much as we wanted to raise awareness and start a discussion.

We found an interesting reference point in the dialogue from David Lynch’s ‘Lost Highway’, when the person who is at once present at the house and at the party says to Bill Pullman’s character that he invited him, and it is not his custom to go where he is not invited. In a way, it resembles the way we invite progress to our doorstep without necessarily taking into consideration all the downsides and the outcome.

(video by coloringchaos)

Group members
Arielle Hein
T.K Broedrick
Bing Huan
Evan Wu

Heartalarm – pcomp midterm


The initial idea for this project came from living in a bad part of Bushwick that I was scared to walk through at night. I had the idea of making a self-defence device/alarm that responds to your level of stress and anxiety. Looking for the right data to trigger this alarm, I arrived at the pulse sensor. I decided to make a basic alarm that would be triggered when the pulse goes above a certain level. The long-term plan was to have the small, wearable device connect to wi-fi and report any irregularities online to your spouse/parent/person of your choice. Feedback I got from the class while presenting my idea:

  • There should be another form of input to trigger the alarm so that it doesn’t go off for the wrong reason. I should look into combining more sensors, like distance and light, so that if it is dark + someone is getting closer to you + your heart rate is very high > then the alarm will go off.
  • There should be some form of approval from the user that there is indeed a need for help.
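The first piece of feedback amounts to AND-ing three sensor conditions. Below is a minimal plain-Java sketch of that rule; the thresholds and names are invented for illustration and are not values from the actual device.

```java
// Combine three readings so the alarm only fires when it is dark AND
// someone is close AND the heart rate is high. Thresholds hypothetical.
public class AlarmLogic {
    static final int LIGHT_DARK_BELOW = 200;   // ambient light reading
    static final int DISTANCE_CLOSE_CM = 100;  // distance sensor, cm
    static final int PULSE_HIGH_BPM = 120;     // heart rate, bpm

    static boolean shouldAlarm(int light, int distanceCm, int bpm) {
        return light < LIGHT_DARK_BELOW
            && distanceCm < DISTANCE_CLOSE_CM
            && bpm > PULSE_HIGH_BPM;
    }

    public static void main(String[] args) {
        System.out.println(shouldAlarm(50, 60, 140));  // true: all three hold
        System.out.println(shouldAlarm(800, 60, 140)); // false: daylight
    }
}
```

The second piece of feedback, user approval, would add a fourth condition (or a short confirm/cancel window before the alarm actually sounds).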


  • Failure to work with a piezo – the code that came with the sensor used an interrupt, and there was some overlap with the Tone library that wouldn’t allow the piezo and the sensor to work at once. After trying to troubleshoot this until the last minute, I ended up using Processing for sound.
  • The reading was not entirely precise – I should have worked further to improve the code and normalise the output values.
  • Not putting enough work into the interface and user experience – being too absorbed in getting the basic piezo and sensor to work, I didn’t pay enough attention to the bigger picture, which was the main challenge of this exercise.
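The normalisation mentioned above could be done with a running average plus min/max rescaling. A plain-Java sketch of the idea; the alpha value, the range numbers and all names are assumptions for illustration, not the actual sensor code.

```java
// Smooth raw pulse readings with an exponential moving average, then
// rescale against an observed min/max so a trigger threshold behaves
// the same across sensors and users. Hypothetical sketch.
public class PulseSmooth {
    // Each new reading nudges the estimate toward it by a factor alpha.
    static float ema(float prev, float raw, float alpha) {
        return prev + alpha * (raw - prev);
    }

    // Map a smoothed reading into 0..1 given an observed range.
    static float normalise(float v, float min, float max) {
        return (v - min) / (max - min);
    }

    public static void main(String[] args) {
        float est = 500;
        for (float raw : new float[]{520, 530, 510}) {
            est = ema(est, raw, 0.5f);  // smooth out single-sample spikes
        }
        System.out.println(est);
        System.out.println(normalise(750, 500, 1000)); // 0.5
    }
}
```

With readings normalised to 0..1, “pulse above a certain level” becomes a single fixed threshold instead of a per-person magic number.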

Lessons (Hopefully) Learned

  • Plan ahead
  • Get a plan B, and C
  • Think big and try more solutions, experiment.
  • More focus on user interface.

Some Pride

  • Getting the project significantly smaller by using a shield, which made it pocket-sized.
  • Learning how to solder.
  • Connecting Arduino and Processing.



(video)