Week 4 – assignment

A thought on the reading 

The more I do something that might be called data visualization, the more I realize that every single step involved in it – choosing, collecting, analyzing, omitting, highlighting, parsing, designing an interaction, even picking a color – is the result of a choice, whether the designer makes it deliberately or only instinctively. I agree with what the author says about rhetorical power: "we accept charts as facts because they are generalized, scientific and seem to present an expert, neutral point of view." If I choose to present a certain piece of information in the form of a standardized data visualization instead of writing it up in a blog post, it suddenly seems to acquire objectivity, as if it were printed in a scientific journal. I was first fascinated by its visual beauty, and only later did I realize the responsibility it carries, after noticing that people who saw my work rarely tried to contest or verify it.

I have found that it is much harder work to empty a canvas of visualization than to fill it. In order not to fill it up, I need to read and understand the data much more deeply. To some degree it is a similar discipline to the first point the author makes. Most of the time I have intentionally removed nulls or missing data to make the result more consistent and clearer about what I intended to show. Sometimes I feel I am just finding a way to fit the data so I can complete a piece of work, rather than tell the truth. As the reading suggests, if we think more about "how designers make decisions about representing uncertainty, including zeros, nulls, and blanks", we will be less tempted toward perfection in visualizing and come a little closer to integrity.

In terms of "making dissent possible", I always wonder why most interactive visualization work lacks even a comment function, one of the most primitive forms of interaction in the web environment. This might be one sign that experts regard their work as flawless fact without realizing it. I believe we should put much more effort into making it possible to question not just the conclusion of a data visualization but every single element of it; that would be the true strength of so-called interactive visualization. For example, we could rethink the tooltip, one of the most overlooked features of interactive visualization. What if the tooltip were an area where the audience could comment on every element of the visualization, as a mutual exchange of opinions? It would open up a collective effort to understand the data better and to discover perspectives that a single designer could never reach.

 

 

Final project plan 

The final project will be part of my thesis project.

-Background of the project

I have always been interested in storytelling, and a great source of inspiration has been moments of life found in physical space. It was natural that photography was once my favorite medium, because it lets me capture a slice of reality in a surreal way, recreating a kind of poetic narrative from the rich context of a random moment in a random place. But as I grew older and spent more and more time in digital space, I came to take my surroundings for granted and became a less sensitive person. So I have tried various experiments in digital space that ultimately give us reasons to experience physical space again.

When I came to ITP I was fascinated by data as a medium for storytelling, because it seems to be another kind of poetic representation of a slice of life. If taking a photograph is a micro way of seeing it, visualizing data might be a macro way of doing the same. But the disadvantage of working with data is that it happens in digital space, so the more I do it, the more I am alienated from physical space. Even though the final visualized piece is often mapped onto physical space, for instance as an installation, the preparation of the data for visualization (collecting, analyzing, contextualizing), which takes much more time, is in most cases conducted in digital space. So I wanted to find a way for that process to relate more to, or even take place in, real places.

Over the course of my research, old East Asian paintings and works by Jenny Holzer, Ben Rubin, and Robert Montgomery inspired me to investigate how ideas, thoughts, or emotions, in the form of text projected onto physical space, enrich the context of nature, public space, and mundane surroundings, and therefore make people engage with them more. The World of Reverie is an experimental approach to how participating in processing data can augment the relationship between human and place.

Starting from a corpus of raw data inspired by a certain place, users can create spatial poetry by juxtaposing a phrase from the corpus with the scenery in front of them, using Augmented Reality. The first prototype is for myself and for people who live in or travel to New York City. The raw data consists of 18,163 phrases from 451 lyrics inspired by the city. While users walk around New York City with the app, they can improvise a poem about any corner or street by selecting two keywords – a momentary response to a certain place – to bring up a phrase. Choosing and situating a phrase that users feel fits a moment and a place is their way of processing personalized data. While users enjoy their poetic creation, they simultaneously participate in collecting, analyzing, and contextualizing the raw data, creating a personalized dataset about the places they explored. The whole process becomes a way of experiencing physical space more, and more deeply, by rediscovering its endless context.

Throughout the project, I hope to get out of my limited world, experience more places, engage with them more deeply, and create a dataset of my conversation with New York City in the form of spatial poetry. As a next step, it could be expanded to other cities so that users can have their own poetic experience of their physical space.

-Final project: mapping the dataset that I created

As shown below, I have been writing poems situated in places around New York City. Whenever I write a poem, I save it as a datum that includes the verse (the phrase), the one or two keywords I chose to bring up that phrase, and information about the place and moment such as geo-coordinates, time, and weather.
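
A minimal sketch of what one such datum might look like (the field names and example values here are my own illustration, not the actual schema):

// one hypothetical datum in the spatial-poetry dataset
var datum = {
  phrase: "…",                                 // the verse I placed at this spot
  keywords: ["river", "dawn"],                 // the one or two keywords used to bring up the phrase (example values)
  location: { lat: 40.7736, lng: -73.9566 },   // geo-coordinates where it was written
  time: "2018-04-20T07:45:00-04:00",           // timestamp of the moment
  weather: "rainy"                             // weather condition at that moment
};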

I am not sure yet which aspect of the dataset I will focus on, but the things I am considering are:

  • If I categorize them by location (e.g. Upper West Side), can I find out how my recognition (or appreciation) varies from one area to another?
  • If I categorize them by time or weather (e.g. morning & rainy), can I find out how those conditions affect my recognition (or appreciation) of the city?
  • If I combine the verses and try to turn them into one complete poem, what would it look like? And how can I keep its geospatial value without using a map?

 

Week 3 – assignment

For this week's assignment, I tried to collect the context of the place where I am right now. One of those contexts is what kinds of venues surround me. With the Foursquare API, I can sort venues into 908 category types. I assigned each venue type a certain color, e.g. Aquarium as RGB (111, 64, 171). At some places one type of venue dominates; at others it is a complete mixture of diverse types.
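
A minimal sketch of that mapping, not the exact code I used: it queries the Foursquare v2 venue search endpoint for the current coordinates and looks up each venue's primary category in a color table (the credentials, limit, and the CATEGORY_COLORS table are placeholders).

// look up a color for each nearby venue by its primary Foursquare category
var CATEGORY_COLORS = {
  'Aquarium': 'rgb(111,64,171)'
  // ...one entry per category type...
};

function fetchVenueColors(lat, lng) {
  var url = 'https://api.foursquare.com/v2/venues/search' +
    '?ll=' + lat + ',' + lng +
    '&limit=50' +
    '&client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET&v=20180401';
  return fetch(url)
    .then(function(res) { return res.json(); })
    .then(function(json) {
      return json.response.venues.map(function(venue) {
        var category = venue.categories.length ? venue.categories[0].name : 'Unknown';
        return {
          name: venue.name,
          lat: venue.location.lat,
          lng: venue.location.lng,
          color: CATEGORY_COLORS[category] || 'rgb(200,200,200)' // fallback for unmapped types
        };
      });
    });
}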


I developed a mobile web page that visualizes a color mesh of your current location by venue type. When you visit the website you first see about 50 marked points near you on the map; three seconds later they turn into the color mesh. You can click each cell of the mesh to see the information for that venue.


You can play the demo in your mobile phone's browser (Chrome only).

/ Source code

 

 

 

Week 2 – Assignment

For this week's assignment, I decided to use my own dataset of location information that I have tracked over the past year with OpenPaths. There are almost 1,000 data points, each with a coordinate and a timestamp showing where I was at a certain moment.

Since, like any other dad, I live a completely different life on weekdays and weekends, I planned to visualize one year of my paths through the city, assuming there might be a significant difference in where I have been depending on the day.


I tried to combine Mapbox GL JS and D3.js, because I wanted a transition effect that changes the size of the dots when switching between weekday and weekend.
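
A rough sketch of that combination, assuming a points array with lon/lat and an isWeekend flag (the token, style, and data are placeholders, not my original code): D3 draws the dots in an SVG overlay on the map's canvas container, re-projects them whenever the map moves, and transitions their radius when the weekday/weekend selection changes.

mapboxgl.accessToken = 'YOUR_MAPBOX_TOKEN';
var map = new mapboxgl.Map({
  container: 'map',
  style: 'mapbox://styles/mapbox/dark-v9',
  center: [-73.98, 40.73],
  zoom: 11
});

// SVG overlay on top of the map canvas
var svg = d3.select(map.getCanvasContainer()).append('svg')
  .attr('width', '100%')
  .attr('height', '100%')
  .style('position', 'absolute');

function project(d) {
  return map.project(new mapboxgl.LngLat(d.lon, d.lat));
}

var dots = svg.selectAll('circle')
  .data(points) // e.g. [{ lon: -73.98, lat: 40.73, isWeekend: false }, ...]
  .enter()
  .append('circle')
  .attr('r', 4);

// keep the dots aligned with the map while panning/zooming
function render() {
  dots.attr('cx', function(d) { return project(d).x; })
      .attr('cy', function(d) { return project(d).y; });
}
map.on('move', render);
render();

// call this when the weekday/weekend toggle changes:
// grow the selected day type and shrink the rest
function highlight(isWeekend) {
  dots.transition().duration(500)
      .attr('r', function(d) { return d.isWeekend === isWeekend ? 6 : 1; });
}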


Even though the difference between weekend and weekday was smaller than expected, it is clear that my radius of living shrinks even further on weekends. I suppose a person's location data can suggest which phase of life they are in – I am definitely in a less active phase in terms of mobility. If I keep gathering this location data, I will be able to show how my lifestyle changes by comparing it year by year.

Demo website

Source Code 

 

Week 1 – assignment

When I was asked "where is your home?" in the first class, I hesitated to answer right away. I could come up with several places, both here in New York and in my hometown in Korea. And when I realized that a web map consists of tons of small tile images, it felt like a fragment of my geographical recognition of my homes, because when I think of each home and explore it on the web map, the way I find and mark it differs in how far I zoom in and how I set its extent. So I planned to make a collection of pieces of the places I can call home, using raster tile images from a web map.


Users can click each of their homes while exploring the map. Once they click, the app collects the 3×3 tiles around that point. When they finish marking, they can click the 'get my homes' button, and the collected tile images appear in random order. It is an experiment in embodying the geographically embedded memory of your homes.
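
A rough sketch of the 3×3 tile collection, assuming standard slippy-map (z/x/y) raster tiles; the tile URL template is only a placeholder for whichever tile server the map uses.

// convert a longitude/latitude pair to slippy-map tile coordinates at a given zoom
function lngLatToTile(lon, lat, zoom) {
  var n = Math.pow(2, zoom);
  var x = Math.floor((lon + 180) / 360 * n);
  var latRad = lat * Math.PI / 180;
  var y = Math.floor((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n);
  return { x: x, y: y };
}

// gather the URLs of the 3x3 tiles surrounding the clicked point
function collectHomeTiles(lon, lat, zoom) {
  var center = lngLatToTile(lon, lat, zoom);
  var urls = [];
  for (var dx = -1; dx <= 1; dx++) {
    for (var dy = -1; dy <= 1; dy++) {
      urls.push('https://tile.example.com/' + zoom + '/' + (center.x + dx) + '/' + (center.y + dy) + '.png');
    }
  }
  return urls;
}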


Click to play the demo 

Source Code

 

Live web – final project idea : View

  • Background: Let's look at an old Chinese painting – a Chinese poem written onto a landscape painting. That was the mainstream of art in East Asia back then. Or look at this work by Robert Montgomery, who built a sculpture of a poem in actual space. One of my favorite subjects for my photography is finding text in a random space at a random moment and juxtaposing the two to make some meaning out of them. I believe this kind of randomness has the potential to create a new genre of literature in the mobile age. It might be a slightly awkward question, but what would it look like if Shakespeare did a live broadcast with his writing?
  • Idea: View is a mobile platform where a writer's view of the world, in text, is laid over the reader's view through their mobile camera in real time, so that the story can merge into the reader's reality. One single phrase, but a totally different feeling depending on where the reader is. A quick demo shows how it looks with an AR camera, which lets the text lie on the scene more naturally and seamlessly. Readers can change the font for a different feeling and record the scene as a photo or video. The project could be an experiment in an alternative way of writing and reading literature in the mobile age.

Live web – midterm idea & demo

  • Title: Live Emotion Map
  • Background: to understand my current and overall emotional state by making a simple web tool for collecting and visualizing the data
  • Methodology: (1) gather the data completely manually, because my emotions can only be interpreted subjectively by me; (2) input the data from a mobile web page with multiple-choice options; (3) whenever I enter new data from my phone, a visualization of the accumulated emotion map appears on a website; (4) anyone who wants to know me better can visit that website


  • Step #1 for midterm: build a mobile web page for data input and a website that visualizes the overall result in real time with socket.io


  • Step #2 for final: develop it further into a VR room whose light changes with the emotion data, like Carlos Cruz-Diez's work Chromosaturation.

Visitors wear protective coverings on their shoes as they walk through a light installation called "Chromosaturation" from 1965-2012 by Carlos Cruz-Diez at an exhibition entitled "Light Show" at the Hayward Gallery in London. (Suzanne Plunkett/Reuters)

  • Quick demo for midterm: I made a simple program to demonstrate the idea of a live mirror. Through a web socket, input from a button on a mobile web page is sent to my laptop's webcam view and drawn as a layer over it, rendered with openFrameworks. The color balance of the pixels in the webcam view changes according to the colors I select.

Live Web / WebRTC – Magnifying glass

I made a demo using WebRTC and the canvas drawImage() function. When a user clicks the 'Take Photo' button, it captures part of the webcam feed as an image and pastes an enlarged version of it into the background.


Click for the demo site

<html>

<head>
  <style>
    body,
    html {
      height: 100%;
      margin: 0;
    }

    #imagefile {
      background-position: center;
      background-repeat: no-repeat;
      background-size: cover;
      height: 100%;
      background-image: url('https://i.ytimg.com/vi/7ZDC8IfVTdQ/maxresdefault.jpg');
      z-index: 1;
    }


    #thecanvas {
      position: absolute;
      margin:0;
      top: 0;
      left: 0px;
      width: 160px;
      height: 90px;
      z-index: 2;
    }

    #thevideo {
      position: absolute;
      margin:0;
      top: 0;
      left: 0;
      width: 160px;
      height: 90px;
      z-index: 3;
    }

    #sendbutton {
      position: absolute;
      top: 70px;
      left: 0px;
      margin: 0px;
      width: 160px;
      height: 20px;
      border-radius: none;
      background: yellow;
      color: black;
      font-size: 0.5em;
      border: none;
      z-index: 4;
      opacity : 0.5;
    }
  </style>

  <script src="/socket.io/socket.io.js"></script>
  <script>
    var socket = io.connect();

    socket.on('connect',
      function() {
        console.log("Connected");
      }

    );

    socket.on('image',
      function(data) {
        console.log("Got image");
        // document.getElementById('imagefile').src = data;
        document.getElementById('imagefile').style.backgroundImage = "url('" + data + "')";
      }
    );


    var initWebRTC = function() {

      var hdConstraints = {
      video: {
        mandatory: {
          minWidth: 1280,
          minHeight: 720
        }
      }
    };

      // These help with cross-browser functionality
      window.URL = window.URL || window.webkitURL || window.mozURL || window.msURL;
      navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia; // The video element on the page to display the webcam
      var video = document.getElementById('thevideo'); // if we have the method
      if (navigator.getUserMedia) {
        navigator.getUserMedia(hdConstraints,
          function(stream) {
            video.src = window.URL.createObjectURL(stream) || stream;
            video.play();
          },
          function(error) {
            alert("Failure " + error.code);
          }
        );
      }

      var thecanvas = document.getElementById('thecanvas');
      var thecontext = thecanvas.getContext('2d');

      var draw = function() {
        // thecontext.drawImage(video, randomPositionX, randomPositionY, video.width / 4, video.height / 4, 0, 0, video.width, video.height);
        var randomPositionX = Math.floor(Math.random() * (1280 - 160));
        var randomPositionY = Math.floor(Math.random() * (720 - 90));
        console.log(randomPositionX, randomPositionY)
        // crop a random 160x90 region of the stream and scale it up to the canvas
        thecontext.drawImage(video, randomPositionX, randomPositionY, 160, 90, 0, 0, video.width, video.height);
        var dataUrl = thecanvas.toDataURL('image/webp', 1);

        socket.emit('image', dataUrl);
      };

      document.getElementById('sendbutton').addEventListener('click', draw);
    }

    window.addEventListener('load', initWebRTC);
  </script>
</head>

<body>

  <div id="imagefile">
    <div id="vidoewrapper">
      <video id="thevideo" width="320px" height="240px" autoplay></video>
      <button id="sendbutton">Take Photo</button>
    </div>
    <canvas id="thecanvas"></canvas>
  </div>


</body>

</html>

 

 

Live Web / Socket.io – Lame Chat

As practice with socket.io, I deliberately made a lame chat that limits the choice of words you can use. Using the Twitter streaming API, the app gives you a choice of around 200 words taken from recent tweets within the NYC area. You build a sentence by clicking the word buttons one by one. It is still very much at a conceptual stage, but it might become an intriguing app to play with if developed further.


//server.js

// Twitter part
var Twit = require('twit')

var T = new Twit({
  // credentials redacted; substitute your own app's keys
  consumer_key: "YOUR_CONSUMER_KEY",
  consumer_secret: "YOUR_CONSUMER_SECRET",
  access_token: "YOUR_ACCESS_TOKEN",
  access_token_secret: "YOUR_ACCESS_TOKEN_SECRET",
  timeout_ms: 60 * 1000, // optional HTTP request timeout to apply to all requests.
})

var finalWords = [];

function getTweet() {
  // filter the public stream by the latitude/longitude bounded box of NYC
  var nyc = ['-74.2591', '40.4774', '-73.7002', '40.9162']
  var stream = T.stream('statuses/filter', {
    locations: nyc
  })

  stream.on('tweet', function(tweet) {
    var text = tweet.text;
    text = text.split('http')[0];
    text = text.split('RT')[0];
    text = text.split('@')[0];
    if (text !== "") {
      var words = text.split(' ');
      Array.prototype.push.apply(finalWords, words);
    }
    console.log(finalWords);
    console.log(finalWords.length);

    if (finalWords.length >= 200) {
      stream.stop()
      console.log('stopped');
    }
  })
}


// Chat server part

var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

app.get('/', function(req, res) {
  res.sendFile(__dirname + '/index.html');
});

io.on('connection', function(socket) {
  getTweet();
  console.log("We have a new client: " + socket.id);

  // Send picked words to client
  io.emit('selectedWords', finalWords)

  socket.on('disconnect', function() {
    console.log('user disconnected');
  });

  // Share messages to all
  socket.on('chat message', function(data) {
    console.log("Received: 'chat message' " + data);
    io.emit('chat message', data);

  });
});


http.listen(3000, function() {
  console.log('listening on *:3000');
});
// index.html 


<!doctype html>
<html>

<head>
  <script src="/socket.io/socket.io.js"></script>
  <title>NYC Lame Chat</title>
  <style>
    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }

    body {
      font: 13px Helvetica, Arial;
    }

    #sendButton {
      width: 100%;
      height: 20px;
      font-size: 13px;
      background: lightgray;
    }

    .wordButton {
      height: 20px;
      font-size: 13px;
    }

    #messageSection {
      width: 100%;
      height: 400px;
      background: black;
      color: white;
      word-wrap: break-word;
      overflow-wrap: break-word;
      font-size: 15px;
    }


    #buttonSection {
      text-align: center;
    }
  </style>
</head>

<body>
  <div id="messageSection"></div>
  <div id="inputSection">
    <div id="buttonSection">
    </div>
    <button id="sendButton" onclick="sendMessage()"> send </button>
  </div>

  <script>
    var socket = io();

    socket.on('connect', function() {
      console.log("Connected");
    });

    // Receive words from recent tweets in NYC
    socket.on('selectedWords', function(data) {
      console.log(data);
      for (var i = 0; i < data.length; i++) {
        // skip empty strings; compare the word itself, not the text node
        if (data[i] !== "") {
          var text = document.createTextNode(data[i]);
          var button = document.createElement("button");
          button.setAttribute("class", "wordButton");
          button.appendChild(text);
          button.addEventListener('click', addWords);
          document.getElementById("buttonSection").appendChild(button);
        }
      }
    });

    // Receive message
    socket.on('chat message', function(data) {
      console.log(data);
      document.getElementById('messageSection').innerHTML = document.getElementById('messageSection').innerHTML + " " + data;
    });

    // Put a word on server when choosing(clicking) it.
    function addWords() {
      console.log("chat message: " + this.innerHTML);
      socket.emit('chat message', this.innerHTML);
    }

    // Divide messages by adding line breaker. 
    function sendMessage() {
      console.log("send message");
      socket.emit('chat message', "< <br /><br />");
    }
  </script>

</body>

</html>

 

 

 

Live Web – an example of live web & self portrait

An example of live web – Your world of text

One of my favorites. It is endless, chaotic, collaborative, random, useless, and hollow, like the web itself. Your World of Text is an infinite grid of text editable by anyone. Changes made by other people appear on your screen as they happen, and you can scroll endlessly through the world to see people's scribbles. It is a live, gigantic, empty sketchbook for everyone, or just for you. You can create your own URL, such as http://yourworldoftext.com/liveweb, to start a new world from blank. Strictly speaking it might not fall into the category of the live web, but one of its features makes me place it there: being raw, without any kind of censorship before things are disclosed, is a dangerous but meaningful part of being live.

 

Self portrait – Something I like

Using the HTML5 video tag and a simple click interaction, I made a single-page collection of GIFs of things I like. When you click one of the GIFs, it is highlighted and keeps playing while the others are paused and faded out. Here is the link to the demo.


<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="style.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"></script>
<link href="https://fonts.googleapis.com/css?family=Raleway:900" rel="stylesheet">
<style>
body {
margin: 10px;
}

.wrapper {
width: 810px;
display: grid;
grid-gap: 3px;
grid-template-columns: 200px 200px 200px 200px;
background-color: #fff;
}

.snippet {
font-family: 'Raleway', sans-serif;
color: #fff;
font-size: 20px;
width: 150px;
height: auto;
padding-top: 10px;
padding-left: 10px;
position: fixed;
z-index: -1;
}

.box {
background-color: #fff;
border-radius: 0px;
margin: 0;
width: 200px;
height: 130px;
overflow: hidden;
}

.video {
margin: 0;
width: 135%;
transform: translateX(-10%) translateY(-10%);
height: auto;
}
</style>
<title>Self Portrait of Younho</title>
</head>

<body>
<div class="wrapper">
<div class="box">
<div class="snippet">Yeah I was a mad man</div>
<video class="video" src="https://media.giphy.com/media/39TWBQZ296G40/giphy.mp4" autoplay="" loop=""></video>
</div>
<div class="box">
<h1 class="snippet">Love Knicks as much as I hate it</h1>
<video class="video" src="https://media.giphy.com/media/ym3yVHEIgWEG4/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">I'm a part time nanny..or fulltime</h1>
<video class="video" src="https://media.giphy.com/media/13AXYJh2jDt2IE/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">You should try it!!!</h1>
<video class="video" src="https://media.giphy.com/media/3ofSB1BswqflLVt4ic/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Yes I watched the whole season again</h1>
<video class="video" src="https://media.giphy.com/media/UHLtCLwRsbDFK/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Black. Iced. Need it now</h1>
<video class="video" src="https://media.giphy.com/media/l41lRi0VWdnH90yJy/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Kind of blue : all time favorite</h1>
<video class="video" src="https://media.giphy.com/media/u0SaOREpBwSTC/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Time to go home...</h1>
<video class="video" src="https://media.giphy.com/media/3oriffVYPYhQbDRe3C/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">I had a dream...</h1>
<video class="video" src="https://media.giphy.com/media/3oz8xFHHldmilI2Was/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Who doesn't???</h1>
<video class="video" src="https://media.giphy.com/media/CdqNOCOc8Lcw8/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Shouldn't have started it...</h1>
<video class="video" src="https://media.giphy.com/media/vyPLDShqm3j5S/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">...of course I'm joking</h1>
<video class="video" src="https://media.giphy.com/media/26tn33aiTi1jkl6H6/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">No.1 destination</h1>
<video class="video" src="https://media.giphy.com/media/3o85xJWqnjH1Xu5rmE/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">lol</h1>
<video class="video" src="https://media.giphy.com/media/1mtKnWJVTpUKQ/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">It's not pink...</h1>
<video class="video" src="https://media.giphy.com/media/k0H9IeP5g4O52/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Don't overcook please...</h1>
<video class="video" src="https://media.giphy.com/media/xUA7aU4305QHkswlji/giphy.mp4" autoplay="" loop="">
</div>
</div>
<script type="text/javascript">
$('.video').each(function(e) {
$(this).on("click", pause);
$(this).on("mouseout", play);
});

var r, g, b;

function pause(e) {
randomNumber();
this.parentNode.style.backgroundColor = 'rgb(' + r + ',' + g + ',' + b + ')';
this.style.opacity = 0.4;
$('.video').not(this).each(function() { $(this).get(0).pause(); });
$('.video').not(this).each(function() { $(this).get(0).style.opacity = 0.15; });
$('.video').not(this).each(function() { $(this).get(0).parentNode.style.backgroundColor = 'white'; });
this.parentNode.children[0].style.zIndex = 2;
}

function play(e) {
this.play();
this.parentNode.style.backgroundColor = '#fff';
this.style.opacity = 1;
$('.video').each(function() { $(this).get(0).play();});
$('.video').not(this).each(function() { $(this).get(0).style.opacity = 1; });
$('.video').not(this).each(function() { $(this).get(0).parentNode.style.backgroundColor = '#fff';});
this.parentNode.children[0].style.zIndex = -1;
}

function randomNumber() {
r = Math.floor(Math.random() * 255);
g = Math.floor(Math.random() * 255);
b = Math.floor(Math.random() * 255);
}
</script>
</body>

</html>

 

 

[Data Art] Lost in Translation

As the final project for the Data Art class, Ellen and I teamed up and were discussing the project over a mobile messenger. While having the conversation in Korean, we noticed we were using a lot of 'ㅋ', as most Koreans do.


Naturally the discussion turned to emoji, another kind of abstract, compact notation widely used on mobile. We use it unconsciously every day, but we may not fully understand what it means, even though it is supposed to mean something specific. We thought there are complicated emotions behind these kinds of mobile shorthand, leading to a 'lost in translation'. We decided to analyze 'ㅋ' as a prototype project, collecting and representing data on the emotion behind it.


'ㅋ' is originally a transliteration of the sound of laughter, something similar to 'kekeke' in English. But people use it far beyond laughter, and each person uses it differently. There is even a joke about how its meaning changes with the number of times it is repeated.


We wanted to find the undertones of this letter, so we did a self-analysis of the emotion behind it, breaking it down into several emotions from joy to fear. The criteria we picked are a standard used in computer-based emotion analysis, but here the analysis was done by us, human beings. The process of this human analysis was interesting to both of us, since we had each felt there was misinterpretation when we did our own earlier projects with computerized sentiment analysis.


We also did a mutual analysis. Each letter was analyzed first by the person who wrote it and then again by the other person, so we could see the difference between the sender's intention and the receiver's interpretation.


We set rules – guidelines for the visualization – for how to represent the emotion. The amount of each emotion determines one typographic feature of the letter: size, width, weight, or shear. Based on the results of the analysis, each letter took on its own unique form. In doing so, we wanted to embed the complicated emotion in the letter itself.
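
Our mapping was applied by hand, but as a purely hypothetical sketch it could be automated roughly like this; which emotion drives which feature, and the numeric ranges, are arbitrary choices for illustration, not our actual rules.

// hypothetical: turn emotion scores (0-1) into the four typographic features we used
function letterStyle(emotion) {
  return {
    fontSize: (20 + emotion.joy * 60) + 'px',                 // one emotion -> size
    fontStretch: (50 + emotion.surprise * 100) + '%',         // another -> width (needs a variable font)
    fontWeight: Math.round(100 + emotion.anger * 800),        // another -> weight
    transform: 'skewX(' + (emotion.fear * 30 - 15) + 'deg)'   // another -> shear
  };
}

// e.g. Object.assign(letterElement.style, letterStyle({ joy: 0.8, surprise: 0.2, anger: 0.1, fear: 0.4 }));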


Below is a summary of the results. Every letter has a distinctive form, different from all the others. We found several interesting takeaways.


First, as we guessed before we started, things really do get lost in translation: there is a huge gap between intention and interpretation. Secondly, the letter actually reflects the personality of the person using it. Almost every form of my letters is quite similar no matter what the context of the phrase is, so we can assume each person uses this letter with a similar emotion regardless of context. We saw this little theory confirmed again when we looked at the results crosswise: for example, my analysis of my own words is similar to my interpretation of Ellen's words, meaning I both used and read this letter in my own way, with a certain emotion. The similarity of form depended not on the context but on the person. So our conclusion from this odd experiment is that the letter carries quite diverse emotions behind it, shaped mainly by each person's personality.


We thought it would make sense to turn the forms into some kind of accessory, since they so strongly reflect the personality of the person who uses the letter. We prototyped this by laser-cutting design elements that could go on anything and represent that personality. This small prototype experiment between the two of us could be extended to a large number of people, where more meaningful or interesting results might unfold in the next step.


[AOAC Final] Walk by Walk

This semester I have been exploring location, because I believe the aspects related to location are among the most important things about mobile.

Background #1

The project started from thinking about myself – what I need most these days. I definitely need some exercise! Since coming to ITP (or even before), I haven't exercised enough. I should walk more, and I need some stimulus to do it. I don't like treadmills and prefer to walk outside. I wore a Fitbit all the time as a stimulus, but it is easy to ignore and nobody blames you for it. Plus, walking alone is boring.

Background #2

I have been deeply into a mobile game, Clash Royale, a real-time 1:1 battle game. Why do I love it? I can focus on it completely for three minutes without distraction because of the competitive aspect. The fact that I can quickly and casually meet a complete stranger somewhere far away and share such a short but intense moment with them is only possible on mobile.

Idea 

Walk by Walk is a real-time 1:1 walking game. The competition is a trigger to walk (exercise) more. It connects you to someone else, wherever they are, and lets the two of you share the experience of walking and spatial memory. To make sharing that experience more immersive and the data feel more alive, I use the Google Street View API to pull in a substantial amount of imagery, so the two players can share spatial memory through the views of the places they are walking.
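
A small sketch of how that imagery could be fetched, assuming the Street View Static API is used; the key, image size, and element id are placeholders.

// build a Street View Static API URL for a given position along the walk
function streetViewImageUrl(lat, lng, heading) {
  return 'https://maps.googleapis.com/maps/api/streetview' +
    '?size=640x400' +
    '&location=' + lat + ',' + lng +
    '&heading=' + heading +
    '&key=YOUR_API_KEY';
}

// e.g. show the opponent's latest position as an image
document.getElementById('opponentView').src = streetViewImageUrl(40.7295, -73.9965, 90);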


How it works

Location data from the two phones is exchanged while their owners are walking, transferred through a realtime database (I used Firebase).
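
A minimal sketch of that exchange with the Firebase Web SDK of that time; the database paths (games/{gameId}/players/{playerId}) and player ids are my own assumption, not the actual schema.

// initialize Firebase with the project's config object
firebase.initializeApp({ /* apiKey, authDomain, databaseURL, ... */ });
var db = firebase.database();

// publish my position to the shared game as I walk
navigator.geolocation.watchPosition(function(pos) {
  db.ref('games/demo-game/players/player-1').set({
    lat: pos.coords.latitude,
    lng: pos.coords.longitude,
    timestamp: Date.now()
  });
});

// follow my opponent's position in real time
db.ref('games/demo-game/players/player-2').on('value', function(snapshot) {
  var opponent = snapshot.val();
  if (opponent) {
    // update the opponent's Street View image, progress, etc. here
    console.log('Opponent is at', opponent.lat, opponent.lng);
  }
});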


5th assignment : Ongoing project & Story Space

Ongoing project / 

I am still working on the project about storytelling with Google Street View. I was inspired by the 9-Eyes project, so I looked around familiar places, such as a spot near my first job or the street where I had my first date with my wife. I already have specific memories of these places, so I can recall them and lay a fictional story over the scene. I tried to create a conversation when there are people in the scene, using text like speech bubbles in comic books.


(the street where I had my first date)


(the back street of the company where I worked)


I plan to make a series of scenes with conversations on them, weaving together the context from my memory and the circumstances found in Street View.

 

Story space / 

There are two searchable elements in my project: the conversation and the place. When I create a new scene, a short snippet explaining the scene is generated in the search column. For example,

— “Are you kidding me?” @ Jung-dong, Seoul 

Users can choose each story based on what kind of conversation the people in the scene are having and where they are.

4th assignment : interaction

/Update about background/

I am trying to use the Google Street View API as the background. As in this project, there are limitless options for backgrounds if I can integrate my 3D characters into Street View. The same action, location, or relationship between characters could expand into countless stories set in different locations around the world.

 

/Interaction/ 

I did two tests for the interaction. Check the test site for a demo.

# Location: the user's current location, or a place they search for, is the interaction element that decides the background. It could be one of my everyday places or a completely unknown, imagined place. (Reference: the Google Street View panorama example code.)

# Moving the foreground element with the mouse: since I couldn't use Kinectron this week, I tested moving foreground elements with the mouse. As I haven't decided on the foreground elements yet, I just used one of the three.js objects. As the mouse moves, the object follows it (see the sketch below).
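
A self-contained sketch of that mouse-follow test, assuming plain three.js with a placeholder cube standing in for the character; the mapping from normalized mouse coordinates to world units is a rough approximation, not my exact code.

// minimal three.js scene with one object that follows the mouse
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 10;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cube = new THREE.Mesh(new THREE.BoxGeometry(2, 2, 2), new THREE.MeshNormalMaterial());
scene.add(cube);

// track the mouse in normalized device coordinates (-1 to 1)
var mouse = new THREE.Vector2();
window.addEventListener('mousemove', function(e) {
  mouse.x = (e.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(e.clientY / window.innerHeight) * 2 + 1;
});

function animate() {
  requestAnimationFrame(animate);
  // roughly map the mouse position onto a plane in front of the camera
  cube.position.x = mouse.x * 6;
  cube.position.y = mouse.y * 4;
  renderer.render(scene, camera);
}
animate();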


3rd assignment : Creating 3D objects with Fuse

For this week's assignment, I tried to create foreground objects using Fuse. Building the objects is not hard, since Fuse's UI is quite straightforward. I made a bunch of characters with different animations and exported them as '.dae' files so I could include them in the three.js background.


The first problem appeared when using the Collada example. It worked when I imported only the character, but failed when I imported it with animations, giving the error "couldn't find joint mixamo rig…". I searched Google for a while but couldn't figure out how to fix it. My guess is that ColladaLoader.js isn't perfectly compatible with Mixamo's '.dae' export.


Secondly, it is a little tricky to combine the foreground object and the 360° background image in a single JS file and HTML page. I can grasp how each part works on its own, but I had a hard time setting up (initializing) both at the same time, as I am still not used to three.js.

I am still getting used to the technical side of each element and have not yet decided what story I will design. I am fascinated by how easily I can create detailed 3D characters in Fuse, so I am thinking of creating several distinctive characters with it and giving users different options for the background, to create variations of the storytelling.