Live web – final project idea : View

  • Background : Let’s look at an old Chinese landscape painting with a poem written across it; back then, that was the mainstream of art in East Asia. Or look at this work by Robert Montgomery, who installs poems as sculptures in physical space. One of my favorite subjects in my photography is finding text in a random space at a random moment and juxtaposing the two to make some meaning out of them. I believe this kind of randomness has the potential to create a new genre of literature in the mobile age. It might be a slightly awkward question, but what would it look like if Shakespeare did live broadcasts of his writing?
  • Idea : View is a mobile platform that places a writer’s text over the reader’s view of the world through their phone camera in real time, so the story can merge into the reader’s reality. A single phrase, but a totally different feeling depending on where the reader is. A quick demo will show how it looks with an AR camera, which lets the text lie on the scene more naturally and seamlessly. Readers can change the font to create a different feeling and record the scene as a photo or video. So the project could be an experiment in an alternative way of writing and reading literature in this mobile age.

Live web – midterm idea & demo

  • Title : Live Emotion Map
  • Background : to understand my current and overall emotional state by building a simple web tool for collecting and visualizing the data
  • Methodology : (1) gather the data completely manually, because emotion can only be interpreted subjectively by me; (2) enter the data from a mobile web page with multiple choices; (3) whenever I enter new data from my phone, the visualization of the accumulated emotion map updates on a website; (4) anyone who wants to know me better can visit this website


  • Step #1 for the midterm : build the mobile web page for data input and the website that visualizes the overall result in real time with socket.io (a minimal sketch of the relay is below).
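
The relay could be a small socket.io server. This is only a minimal sketch, not the final code; the ‘emotion’ event name and the payload shape are placeholders of mine:

// server.js – relay emotion picks from the mobile input page to the visualization page
var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

app.get('/', function(req, res) { res.sendFile(__dirname + '/input.html'); });   // mobile input page
app.get('/map', function(req, res) { res.sendFile(__dirname + '/map.html'); });  // visualization page

io.on('connection', function(socket) {
  // placeholder payload shape: { label: 'joy', time: 1507600000000 }
  socket.on('emotion', function(data) {
    io.emit('emotion', data); // broadcast to every open visualization page
  });
});

http.listen(3000, function() {
  console.log('listening on *:3000');
});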


  • Step #2 for the final : develop it further into a VR room whose light changes with the emotion data, in the spirit of Carlos Cruz-Diez’s work Chromosaturation.

Visitors wear protective coverings on their shoes as they walk through a light installation called "Chromosaturation" from 1965-2012 by Carlos Cruz-Diez at an exhibition entitled "Light Show" at the Hayward Gallery in London. (Suzanne Plunkett/Reuters)

  • Quick demo for the midterm : I made a simple program to represent the idea of a live mirror. Through a web socket, input from a button on a mobile web page is sent to the webcam view on my laptop and drawn as a layer over it, run by openFrameworks. The color ratio of the pixels in the webcam view changes according to the colors I select.

Live Web / WebRTC – Magnifying glass

I made a demo using WebRTC and the canvas drawImage() function. When the user clicks the ‘Take Photo’ button, it captures a small region of the webcam frame as an image and pastes an enlarged version of it into the background.


Click to demo site

<!DOCTYPE html>
<html>

<head>
  <style>
    body,
    html {
      height: 100%;
      margin: 0;
    }

    #imagefile {
      background-position: center;
      background-repeat: no-repeat;
      background-size: cover;
      height: 100%;
      background-image: url('https://i.ytimg.com/vi/7ZDC8IfVTdQ/maxresdefault.jpg');
      z-index: 1;
    }


    #thecanvas {
      position: absolute;
      margin:0;
      top: 0;
      left: 0px;
      width: 160px;
      height: 90px;
      z-index: 2;
    }

    #thevideo {
      position: absolute;
      margin:0;
      top: 0;
      left: 0;
      width: 160px;
      height: 90px;
      z-index: 3;
    }

    #sendbutton {
      position: absolute;
      top: 70px;
      left: 0px;
      margin: 0px;
      width: 160px;
      height: 20px;
      border-radius: 0;
      background: yellow;
      color: black;
      font-size: 0.5em;
      border: none;
      z-index: 4;
      opacity : 0.5;
    }
  </style>

  <script src="/socket.io/socket.io.js"></script>
  <script>
    var socket = io.connect();

    socket.on('connect',
      function() {
        console.log("Connected");
      }

    );

    socket.on('image',
      function(data) {
        console.log("Got image");
        // document.getElementById('imagefile').src = data;
        document.getElementById('imagefile').style.backgroundImage = "url('" + data + "')";
      }
    );


    var initWebRTC = function() {

      var hdConstraints = {
      video: {
        mandatory: {
          minWidth: 1280,
          minHeight: 720
        }
      }
    };

      // These help with cross-browser functionality
      window.URL = window.URL || window.webkitURL || window.mozURL || window.msURL;
      navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia; // The video element on the page to display the webcam
      var video = document.getElementById('thevideo'); // if we have the method
      if (navigator.getUserMedia) {
        navigator.getUserMedia(hdConstraints,
          function(stream) {
            // Prefer srcObject; older browsers need a blob URL from createObjectURL
            if ('srcObject' in video) {
              video.srcObject = stream;
            } else {
              video.src = window.URL.createObjectURL(stream);
            }
            video.play();
          },
          function(error) {
            alert("Failure " + error.code);
          }
        );
      }

      var thecanvas = document.getElementById('thecanvas');
      var thecontext = thecanvas.getContext('2d');

      var draw = function() {
        // Pick a random 160x90 crop inside the 1280x720 video frame
        var randomPositionX = Math.floor(Math.random() * (1280 - 160));
        var randomPositionY = Math.floor(Math.random() * (720 - 90));
        console.log(randomPositionX, randomPositionY);
        // Copy the crop onto the canvas, scaled up to the canvas size
        thecontext.drawImage(video, randomPositionX, randomPositionY, 160, 90, 0, 0, thecanvas.width, thecanvas.height);
        var dataUrl = thecanvas.toDataURL('image/webp', 1);

        // The server broadcasts the enlarged crop back to be used as the background image
        socket.emit('image', dataUrl);
      };

      document.getElementById('sendbutton').addEventListener('click', draw);
    }

    window.addEventListener('load', initWebRTC);
  </script>
</head>

<body>

  <div id="imagefile">
    <div id="vidoewrapper">
      <video id="thevideo" width="320px" height="240px" autoplay></video>
      <button id="sendbutton">Take Photo</button>
    </div>
    <canvas id="thecanvas"></canvas>
  </div>


</body>

</html>

 

 

Live Web / Socket.io – Lame Chat

As practice with socket.io, I deliberately made a lame chat that limits the words you can use. Using the Twitter streaming API, the app gives you a choice of around 200 words taken from recent tweets within the NYC area. You build a sentence by clicking the word buttons one by one. It is still at a conceptual stage, but it might become an intriguing app to play with if developed further.


//server.js

// Twitter part
var Twit = require('twit')

var T = new Twit({
  consumer_key: "YOUR_CONSUMER_KEY",
  consumer_secret: "YOUR_CONSUMER_SECRET",
  access_token: "YOUR_ACCESS_TOKEN",
  access_token_secret: "YOUR_ACCESS_TOKEN_SECRET",
  timeout_ms: 60 * 1000, // optional HTTP request timeout to apply to all requests.
})

var finalWords = [];

function getTweet() {
  // filter the public stream by the latitude/longitude bounded box of NYC
  var nyc = ['-74.2591', '40.4774', '-73.7002', '40.9162']
  var stream = T.stream('statuses/filter', {
    locations: nyc
  })

  stream.on('tweet', function(tweet) {
    var text = tweet.text;
    text = text.split('http')[0];
    text = text.split('RT')[0];
    text = text.split('@')[0];
    if (text !== "") {
      var words = text.split(' ');
      Array.prototype.push.apply(finalWords, words);
    }
    console.log(finalWords);
    console.log(finalWords.length);

    if (finalWords.length >= 200) {
      stream.stop();
      // Enough words collected: send them to every connected client
      io.emit('selectedWords', finalWords);
      console.log('stopped');
    }
  })
}


// Chat server part

var app = require('express')();
var http = require('http').Server(app);
var io = require('socket.io')(http);

app.get('/', function(req, res) {
  res.sendFile(__dirname + '/index.html');
});

io.on('connection', function(socket) {
  getTweet();
  console.log("We have a new client: " + socket.id);

  // Send whatever words have been collected so far to the new client
  socket.emit('selectedWords', finalWords);

  socket.on('disconnect', function() {
    console.log('user disconnected');
  });

  // Share messages to all
  socket.on('chat message', function(data) {
    console.log("Received: 'chat message' " + data);
    io.emit('chat message', data);

  });
});


http.listen(3000, function() {
  console.log('listening on *:3000');
});
// index.html 


<!doctype html>
<html>

<head>
  <script src="/socket.io/socket.io.js"></script>
  <title>NYC Lame Chat</title>
  <style>
    * {
      margin: 0;
      padding: 0;
      box-sizing: border-box;
    }

    body {
      font: 13px Helvetica, Arial;
    }

    #sendButton {
      width: 100%;
      height: 20px;
      font-size: 13px;
      background: lightgray;
    }

    .wordButton {
      height: 20px;
      font-size: 13px;
    }

    #messageSection {
      width: 100%;
      height: 400px;
      background: black;
      color: white;
      word-wrap: break-word;
      overflow-wrap: break-word;
      font-size: 15px;
    }


    #buttonSection {
      text-align: center;
    }
  </style>
</head>

<body>
  <div id="messageSection"></div>
  <div id="inputSection">
    <div id="buttonSection">
    </div>
    <button id="sendButton" onclick="sendMessage()"> send </button>
  </div>

  <script>
    var socket = io();

    socket.on('connect', function() {
      console.log("Connected");
    });

    // Receive words from recent tweets in NYC
    socket.on('selectedWords', function(data) {
      console.log(data);
      for (var i = 0; i < data.length; i++) {
        if (data[i] && data[i].trim() !== "") {
          var text = document.createTextNode(data[i]);
          var button = document.createElement("button");
          button.setAttribute("class", "wordButton");
          button.appendChild(text);
          button.addEventListener('click', addWords);
          document.getElementById("buttonSection").appendChild(button);
        }
      }
      }
    });

    // Receive message
    socket.on('chat message', function(data) {
      console.log(data);
      document.getElementById('messageSection').innerHTML = document.getElementById('messageSection').innerHTML + " " + data;
    });

    // Put a word on server when choosing(clicking) it.
    function addWords() {
      console.log("chat message: " + this.innerHTML);
      socket.emit('chat message', this.innerHTML);
    }

    // Divide messages by adding line breaker. 
    function sendMessage() {
      console.log("send message");
      socket.emit('chat message', "< <br /><br />");
    }
  </script>

</body>

</html>

 

 

 

Live Web – an example of live web & self portrait

An example of live web – Your world of text

One of my favorites. It’s endless, chaotic, collaborative, random, useless and hollow, like the web itself. Your World of Text is an infinite grid of text editable by anyone, and changes made by other people appear on your screen as they happen. You can scroll through the world endlessly to see people’s scribbles. It’s a gigantic live sketchbook for everyone, or just for you, and you can create your own URL to start a new world from blank, like http://yourworldoftext.com/liveweb. Strictly speaking, it might not fall into the category of the live web, but one of its features forces me to place it there: being raw, with no censorship before anything is disclosed, is a dangerous but meaningful part of being live.

 

Self portrait – Something I like

Using the HTML5 video tag and a simple click interaction, I made a single-page collection of GIF-like clips of things I like. When you click one of them, it is highlighted and keeps playing while the others pause and fade out. Here is the link to the demo.


<!DOCTYPE html>
<html>

<head>
<meta charset="utf-8">
<link rel="stylesheet" type="text/css" href="style.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.2.0/jquery.min.js"></script>
<link href="https://fonts.googleapis.com/css?family=Raleway:900" rel="stylesheet">
<style>
body {
margin: 10px;
}

.wrapper {
width: 810px;
display: grid;
grid-gap: 3px;
grid-template-columns: 200px 200px 200px 200px;
background-color: #fff;
}

.snippet {
font-family: 'Raleway', sans-serif;
color: #fff;
font-size: 20px;
width: 150px;
height: auto;
padding-top: 10px;
padding-left: 10px;
position: fixed;
z-index: -1;
}

.box {
background-color: #fff;
border-radius: 0px;
margin: 0;
width: 200px;
height: 130px;
overflow: hidden;
}

.video {
margin: 0;
width: 135%;
transform: translateX(-10%) translateY(-10%);
height: auto;
}
</style>
<title>Self Portrait of Younho</title>
</head>

<body>
<div class="wrapper">
<div class="box">
<div class="snippet">Yeah I was a mad man</div>
<video class="video" src="https://media.giphy.com/media/39TWBQZ296G40/giphy.mp4" autoplay="" loop=""></video>
</div>
<div class="box">
<h1 class="snippet">Love Knicks as much as I hate it</h1>
<video class="video" src="https://media.giphy.com/media/ym3yVHEIgWEG4/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">I'm a part time nanny..or fulltime</h1>
<video class="video" src="https://media.giphy.com/media/13AXYJh2jDt2IE/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">You should try it!!!</h1>
<video class="video" src="https://media.giphy.com/media/3ofSB1BswqflLVt4ic/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Yes I watched the whole season again</h1>
<video class="video" src="https://media.giphy.com/media/UHLtCLwRsbDFK/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Black. Iced. Need it now</h1>
<video class="video" src="https://media.giphy.com/media/l41lRi0VWdnH90yJy/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Kind of blue : all time favorite</h1>
<video class="video" src="https://media.giphy.com/media/u0SaOREpBwSTC/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Time to go home...</h1>
<video class="video" src="https://media.giphy.com/media/3oriffVYPYhQbDRe3C/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">I had a dream...</h1>
<video class="video" src="https://media.giphy.com/media/3oz8xFHHldmilI2Was/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Who doesn't???</h1>
<video class="video" src="https://media.giphy.com/media/CdqNOCOc8Lcw8/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Shouldn't have started it...</h1>
<video class="video" src="https://media.giphy.com/media/vyPLDShqm3j5S/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">...of course I'm joking</h1>
<video class="video" src="https://media.giphy.com/media/26tn33aiTi1jkl6H6/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">No.1 destination</h1>
<video class="video" src="https://media.giphy.com/media/3o85xJWqnjH1Xu5rmE/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">lol</h1>
<video class="video" src="https://media.giphy.com/media/1mtKnWJVTpUKQ/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">It's not pink...</h1>
<video class="video" src="https://media.giphy.com/media/k0H9IeP5g4O52/giphy.mp4" autoplay="" loop="">
</div>
<div class="box">
<h1 class="snippet">Don't overcook please...</h1>
<video class="video" src="https://media.giphy.com/media/xUA7aU4305QHkswlji/giphy.mp4" autoplay="" loop="">
</div>
</div>
<script type="text/javascript">
$('.video').each(function(e) {
$(this).on("click", pause);
$(this).on("mouseout", play);
});

var r, g, b;

function pause(e) {
randomNumber();
this.parentNode.style.backgroundColor = 'rgb(' + r + ',' + g + ',' + b + ')';
this.style.opacity = 0.4;
$('.video').not(this).each(function() { $(this).get(0).pause(); });
$('.video').not(this).each(function() { $(this).get(0).style.opacity = 0.15; });
$('.video').not(this).each(function() { $(this).get(0).parentNode.style.backgroundColor = 'white'; });
this.parentNode.children[0].style.zIndex = 2;
}

function play(e) {
this.play();
this.parentNode.style.backgroundColor = '#fff';
this.style.opacity = 1;
$('.video').each(function() { $(this).get(0).play();});
$('.video').not(this).each(function() { $(this).get(0).style.opacity = 1; });
$('.video').not(this).each(function() { $(this).get(0).parentNode.style.backgroundColor = '#fff';});
this.parentNode.children[0].style.zIndex = -1;
}

function randomNumber() {
r = Math.floor(Math.random() * 255);
g = Math.floor(Math.random() * 255);
b = Math.floor(Math.random() * 255);
}
</script>
</body>

</html>

 

 

[Data Art] Lost in Translation

As the final project for Data Art, Ellen and I teamed up and discussed the project over a mobile messenger. While chatting in Korean, we noticed we were using a lot of ‘ㅋ’, as most Koreans do.


Naturally the discussion moved to emoji, another kind of abstract, compact notation widely used on mobile. We use it unconsciously every day, but we may not fully understand what it means, even though it is supposed to mean something specific. We thought there was complicated emotion behind these kinds of mobile shorthand, leading to a kind of ‘lost in translation’. We decided to analyze ‘ㅋ’ as a prototype project, collecting and representing data about the emotion behind it.


‘ㅋ’ is originally a transliteration of the sound of laughter, similar to ‘kekeke’ in English. But people use it far beyond laughter, and each person uses it differently. There is even a joke about how its meaning changes with the number of ‘ㅋ’s.


We wanted to find the undertone of this letter, so we did a self-analysis of the emotion behind it, breaking each use down into several emotions from joy to fear. The criteria we picked are a standard in computer-based emotion analysis, but here the analysis was done by us, human beings. The process of this human analysis was interesting to both of us, because we had each felt misinterpretation when we ran computerized sentiment analysis in our own earlier projects.


We also did a mutual analysis. Each letter was analyzed first by its sender and then again by the other person, so that we could see the difference between the sender’s intention and the receiver’s interpretation.


We set rules and guidelines for the visualization, that is, how to represent the emotion. The amount of each emotion drives one typographic feature of the letter: size, width, weight, and shear. Based on the result of the analysis, each letter has its own unique form. In doing so, we wanted to embed the complicated emotion in the letter itself (a small sketch of such a mapping is below).
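
We defined the mapping on paper rather than in code, but purely as an illustration, a rule like the following turns a set of emotion scores into typographic CSS. Which emotion drives which feature, and the ranges, are placeholders, not our actual rules:

// Illustrative only: turn emotion scores (0–1) into typographic parameters.
// The choice of which emotion drives which feature, and the ranges, are placeholders.
function emotionToStyle(scores) {
  return {
    fontSize: (20 + scores.joy * 60) + 'px',                        // more joy, bigger letter
    fontWeight: String(100 + Math.round(scores.anger * 8) * 100),   // anger adds weight (100–900)
    transform: 'scaleX(' + (0.5 + scores.sadness) + ') ' +          // sadness stretches the width
               'skewX(' + (scores.fear * 30) + 'deg)'               // fear shears the letter
  };
}

// Usage: apply to an element containing ‘ㅋ’ (the class name is a placeholder)
var el = document.querySelector('.letter');
Object.assign(el.style, emotionToStyle({ joy: 0.7, anger: 0.2, sadness: 0.4, fear: 0.1 }));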


Below is a summary of the results. Every letter has a distinctive form, different from all the others. We found several interesting takeaways from the result.


First, as we guessed before we started, there is indeed something lost in translation: a large gap between intention and interpretation. Second, the letter actually reflects the personality of the person using it. Almost every form of my letters is quite similar no matter what the context of the phrase is, so we can assume each person uses this letter with a similar emotion regardless of context. We confirmed this small theory again when we looked at the results crosswise. For example, my analysis of my own words is similar to my interpretation of Ellen’s words, meaning I both used and recognized this letter in my own way, with a certain emotion. The similarity of form did not depend on the context but rather on the person. So our conclusion from this odd experiment is that the letter carries quite diverse emotions, and that they are shaped mainly by each person’s personality.


We thought it would make sense to turn the forms into a kind of accessory, since they so strongly show the personality of the person who uses the letter. We prototyped this with a laser cutter, making design elements that could represent a personality. This small experiment between the two of us could be extended to a large number of people, and more meaningful or interesting results might unfold in the next step.


[AOAC Final] Walk by Walk

This semester I have been exploring location, because I believe location is one of the most important aspects of mobile.

Background #1

The project started from thinking about what I need most these days: I definitely need some exercise! Since coming to ITP (or even before), I haven’t exercised enough. I should walk more, and I need some stimulus to do it. I don’t like treadmills and prefer to walk outside. I wear a Fitbit all the time as a nudge, but it’s easy to ignore and nobody blames me for it. Plus, walking alone is boring.

Background #2

I’ve been deeply into the mobile game Clash Royale, a real-time 1:1 battle game. Why do I love it? I can completely focus on it for three minutes without distraction because of the competition. The fact that I can quickly and casually meet a complete stranger in some remote place and share such a short but intense moment with them is only possible on mobile.

Idea 

Walk by Walk is a real-time 1:1 walking game. The competition is a trigger to walk (and exercise) more, and it connects you to someone else wherever they are, letting both of you share the experience of walking and a spatial memory. To make the shared experience more immersive and the data feel more alive, I use the Google Street View API to pull in a substantial amount of imagery, so the two players can share spatial memory through views of each other’s places (a sketch of the Street View request is below).
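
The imagery comes from the Street View Static API; a minimal sketch of turning a walker’s position into an image URL (the image size, heading, API key, and element id are placeholders):

// Build a Street View Static API URL for a walker's current position.
// The image size, heading, and API key are placeholders.
function streetViewUrl(lat, lng, heading) {
  return 'https://maps.googleapis.com/maps/api/streetview' +
    '?size=640x360' +
    '&location=' + lat + ',' + lng +
    '&heading=' + (heading || 0) +
    '&key=YOUR_API_KEY';
}

// Usage: render the surroundings of a given point ('view' is a placeholder <img> id)
document.getElementById('view').src = streetViewUrl(40.7308, -73.9973);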


 

How it works

While the two players walk, their phones’ location data is exchanged through a realtime database (I used Firebase). A sketch of that exchange is below.
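
This is a minimal sketch of the exchange, assuming the Firebase web SDK and a made-up database structure (walks/&lt;gameId&gt;/&lt;playerId&gt;); it reuses the streetViewUrl helper from the sketch above:

// Exchange walking positions through the Firebase Realtime Database.
// Assumes firebase.initializeApp(config) was already called; ids and paths are placeholders.
var db = firebase.database();
var gameId = 'demo-game', me = 'playerA', opponent = 'playerB';

// Push my position whenever it changes
navigator.geolocation.watchPosition(function(pos) {
  db.ref('walks/' + gameId + '/' + me).push({
    lat: pos.coords.latitude,
    lng: pos.coords.longitude,
    t: Date.now()
  });
});

// Listen for the opponent's new positions and update their Street View image
db.ref('walks/' + gameId + '/' + opponent).on('child_added', function(snapshot) {
  var p = snapshot.val();
  document.getElementById('opponentView').src = streetViewUrl(p.lat, p.lng);
});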


5th assignment : Ongoing project & Story Space

Ongoing project / 

I keep working on the project of telling stories with Google Street View. I was inspired by the 9-Eyes project, so I looked around familiar places, such as the street near my first job or where I had my first date with my wife. I already have specific memories of these places, so I can recall them and layer a fictional story onto the scene. I tried to build a conversation when there are people in the scene, using text like speech bubbles in a comic book.

[Street View scene: the street where I had my first date]

[Street View scene: the back street of the company where I worked]

I would make a series of scenes with conversations on them, weaving together the context from my memory and the circumstances captured in Street View.

 

Story space / 

There are two searchable elements in my project: the conversation and the place. When I create a new scene, a short snippet explaining it is generated in the search column. For example,

— “Are you kidding me?” @ Jung-dong, Seoul 

Users can choose a story based on the kind of conversation the people in the scene are having and on where they are.

4th assignment : interaction

/Update about background/

I’m trying to use the Google Street View API as the background. Like this project, there are nearly limitless options for the background if I can integrate my 3D characters into Street View. The same action, location, or relationship between characters could expand into countless stories set in different places around the world.

 

/Interaction/ 

I did two tests for the interaction. Check the test site for a demo.

# location : the user’s current location, or a place they search for, is the interaction element that decides the background. It could be one of my daily places or a completely unknown, imagined place (reference code for the Google Street View panorama; see also the sketch after this list).

# moving the foreground element with the mouse : since I couldn’t use Kinectron this week, I tested moving the foreground element with the mouse. I haven’t decided on the foreground elements yet, so I just used one of the three.js objects; as the mouse moves, the object follows.
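
A minimal sketch of the location test, based on the Maps JavaScript API’s StreetViewPanorama (the container id ‘pano’ is a placeholder):

// Create a Street View panorama at a given position (Maps JS API must be loaded).
// The container id 'pano' is a placeholder.
function showPanorama(lat, lng) {
  return new google.maps.StreetViewPanorama(
    document.getElementById('pano'),
    {
      position: { lat: lat, lng: lng },
      pov: { heading: 0, pitch: 0 },
      zoom: 1
    }
  );
}

// Usage: the user's current location becomes the background
navigator.geolocation.getCurrentPosition(function(pos) {
  showPanorama(pos.coords.latitude, pos.coords.longitude);
});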


3rd assignment : Creating 3D objects with Fuse

For this week’s assignment I created foreground objects with Fuse. Building the objects isn’t hard, since Fuse’s UI is quite straightforward. I made a bunch of characters with different animations and exported them as ‘.dae’ files so I could load them into the three.js background (a sketch of the loading step is below).
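
A minimal sketch of loading one exported character with three.js’ ColladaLoader (the file name and scale are placeholders, and ColladaLoader.js has to be included alongside three.js):

// Load a Fuse/Mixamo character exported as Collada and add it to an existing scene.
// 'character.dae' and the scale factor are placeholders.
var loader = new THREE.ColladaLoader();
loader.load('character.dae', function(collada) {
  var character = collada.scene;
  character.scale.set(0.1, 0.1, 0.1); // Fuse exports tend to be large; scale to taste
  scene.add(character);
});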


The first problem happened when using the Collada example. It worked when I imported only the character, but failed when I included the animations, with the error message “couldn’t find joint mixamo rig…”. I searched around for a while but couldn’t figure out how to fix it; I guess ColladaLoader.js isn’t perfectly compatible with Mixamo’s ‘.dae’ export.


Second, it is a little tricky to combine the foreground object and the 360 background image in a single JS file and HTML page. I understand how each works on its own, but I had a hard time setting up (initializing) both at the same time, since I’m still not used to three.js.

I’m still getting used to the technical side of each element and haven’t yet decided what story to design. I’m fascinated by how easily I can create detailed 3D characters in Fuse, so I’m thinking of creating several distinctive characters and giving users different options for the background to make variations of the storytelling.

[Data Art] 3rd assignment – UNTAKEN

Thanks to Google Street View, we can explore almost the entire world from our own rooms with nothing but a phone. Before I came to NYC, I had already experienced Central Park’s Great Lawn. But is that really Central Park? A 360 view is enough to seem real but not enough to be real: I can see the place, but I don’t feel or experience it. I somewhat dislike Google Street View, because it stops us from exploring more places physically.

When I played with Google’s Street View API, I found plenty of points that Google has not captured, even in a big city like New York. In fact, I was glad to see the ‘no result’ screenshot: at least it gives us a reason to actually go there to experience and feel the place.


I made a simple tool that uses the Google Street View API in reverse to find untaken places. When you pick a city, it randomly finds a point within it that has no Street View image. As a prototype I chose six cities, ranging from highly developed ones like New York, which I expected to have no untaken point at all (not true), to barren or infamous(?) ones like Baghdad, where we hardly expect Google to have photographed anything. A sketch of the check is below.
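
The core check can be sketched with the Maps JavaScript API’s StreetViewService: query a random point and treat ZERO_RESULTS as ‘untaken’. The bounding box and the 50 m search radius are placeholders:

// Check whether a random point inside a bounding box has Street View coverage.
// The bounding box values and the 50m radius are placeholders.
var sv = new google.maps.StreetViewService();

function findUntaken(bounds) {
  var lat = bounds.south + Math.random() * (bounds.north - bounds.south);
  var lng = bounds.west + Math.random() * (bounds.east - bounds.west);
  var point = { lat: lat, lng: lng };

  sv.getPanorama({ location: point, radius: 50 }, function(data, status) {
    if (status === google.maps.StreetViewStatus.ZERO_RESULTS) {
      console.log('Untaken place found:', point); // show it with the satellite "telescope" view
    } else {
      findUntaken(bounds); // this point is covered; try another random point
    }
  });
}

// Usage: a rough bounding box around New York City (placeholder values)
findUntaken({ south: 40.48, north: 40.92, west: -74.26, east: -73.70 });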


To make it feel like seeing the untaken place from a distance before exploring it, a zoomed satellite image appears, as if a pirate were looking through a telescope. Of course the satellite image is also a digitized one, but at least it triggers some curiosity to go there physically.

While exploring the results, I was somewhat relieved that there are still plenty of places that have not been digitized, places yet to be defined by a single photo of one instant.


project website

source code 

 

2nd assignment : 360 photo as a background

We usually imagine some kind of vast natural scenery when it comes to 360-degree photos or video. Even armed with new technology, we aren’t shocked that much, because we’ve already experienced those kinds of scenes countless times with all our senses. I instead wanted to capture uncommon scenes in life, the opposite of vastness: the insides of ordinary man-made objects, such as a microwave or a refrigerator. They are too small or too awkward to go into, so we never know how it feels inside them. I expected a strange emotion I hadn’t felt before, from the confinement and the distorted view of materials such as plastic and wires. The result? A feeling of slight discomfort and strangeness, as if in a spaceship.

 

[microwave]


 

[oven]


 

[refrigerator]


 

BLE + Arduino 101 demonstration : saving the location with a button

Continuing the BLE demonstration with the Arduino 101, and as a test for my final project, I made a simple app that stores the current location when the button on the Arduino is clicked. The CurieBLE side works mostly fine, but it takes a while for the app to pick up the BLE name after it is assigned in the Arduino code, so I switched to matching the ID of the BLE device instead of its name to connect instantly. A sketch of the scan-and-match step is below.
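
I won’t claim this is exactly how my app does it, but with a Cordova BLE central plugin the ID-matching step could look like the sketch below (the calls follow cordova-plugin-ble-central, and the device ID and UUIDs are placeholders):

// Scan for the Arduino 101 by its BLE device ID rather than its advertised name,
// then save the phone's current location when the board reports a button press.
// The device ID and the service/characteristic UUIDs are placeholders.
var TARGET_ID = 'D0:5F:B8:00:00:00';

ble.scan([], 5, function(device) {
  if (device.id === TARGET_ID) {
    ble.connect(device.id, function(peripheral) {
      // Subscribe to the button characteristic exposed by the CurieBLE sketch
      ble.startNotification(device.id,
        '19B10000-E8F2-537E-4F6C-D104768A1214',
        '19B10001-E8F2-537E-4F6C-D104768A1214',
        function() {
          navigator.geolocation.getCurrentPosition(function(pos) {
            console.log('Saved location', pos.coords.latitude, pos.coords.longitude);
          });
        }, console.error);
    }, console.error);
  }
}, console.error);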


1st assignment : sequelize

A good story creates a sequel, and vice versa. All kinds of data (characters, history, plot, etc.) are the basis for expansion and possibility; fan fiction proves it. There is a collective process of making narratives, but our imagination is somehow limited, and arbitrary things can sometimes inspire us to go in a different direction.

So here is a slightly silly attempt to make a sequel to Star Wars. Anyone can be part of the serialization: each sentence is generated by a different person, one by one. But it doesn’t come entirely from your imagination: half of it is yours, through a proposed keyword, and the other half is left open for more possibilities. When you enter a keyword, the app searches tweets related to it. Look carefully through each tweet, and if you’re sure it is the perfect phrase for the next sentence, save it to share the story. Then just wait for the next lines. A sketch of the search call is below.
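
The tweet lookup can be done with Twit’s search endpoint; a minimal sketch (the keyword and count are placeholders, and T is a Twit instance like the one in the chat example above):

// Search recent tweets for a user-proposed keyword and offer them as candidate lines.
// The keyword and count are placeholders.
function proposeLines(keyword, callback) {
  T.get('search/tweets', { q: keyword, count: 20, lang: 'en' }, function(err, data) {
    if (err) { return callback(err); }
    var candidates = data.statuses.map(function(tweet) {
      return tweet.text.split('http')[0]; // drop trailing links, as in the chat example
    });
    callback(null, candidates);
  });
}

// Usage
proposeLines('lightsaber', function(err, lines) {
  if (!err) { console.log(lines); }
});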


project website

source code

BLE + Sensortag demonstration : election

An unexpected early presidential election has come to Korea. It’s time to decide without regrets; we all have to think seriously this time so as not to make the same mistake again. We are all absorbed in scrutinizing the candidates, looking high and low, viewing them from various angles.

As an analogy, I mapped the accelerometer values (different sets of x, y, z) to the opacity of images of the current top three candidates, creating a vibrating confusion much like our own state. A sketch of the mapping is below.
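
A minimal sketch of the mapping, assuming accelerometer readings normalized to roughly -1..1 (the element ids are placeholders):

// Map each accelerometer axis to the opacity of one candidate's image.
// Assumes readings roughly in -1..1; the element ids are placeholders.
function updateOpacity(accel) {
  var toOpacity = function(v) {
    return Math.min(1, Math.max(0, (v + 1) / 2)); // -1..1 -> 0..1, clamped
  };
  document.getElementById('candidate1').style.opacity = toOpacity(accel.x);
  document.getElementById('candidate2').style.opacity = toOpacity(accel.y);
  document.getElementById('candidate3').style.opacity = toOpacity(accel.z);
}

// Usage with one reading from the SensorTag
updateOpacity({ x: 0.3, y: -0.8, z: 0.1 });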

source code

 

[Data Art] 2nd assignment – ARTicle

Articles on news websites are the text we see most frequently in everyday life. They are primarily regarded as a source of objective information, but strong, subjective emotion comes through very often, especially in these days of huge conflict between the media and the government. Articles sometimes seem to go beyond text and become a sort of artwork filled with powerful, complex feelings. ARTicle is a Chrome extension that supports an emotional appreciation of an article while you are reading the news on the NYT.


source code

App development : first step of getting place data as the midterm


I changed the concept a little, since the geotag of most tweet or news data is null. So I need to gather place information first, and then I have a specific place keyword for searching tweets or news. To get information about places around you, I used the Google reverse geocoding, Places, and Street View APIs. In the initial setup, clicking the ‘somewhere’ button loads every place within 100 meters with a rating over 4 points and shows them in descending order, with the specific address and a Street View image as the background. A sketch of the Places request is below.
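
A sketch of the nearby lookup with the Places library of the Maps JavaScript API; the rating filter is applied on the client, and the threshold is a placeholder:

// Find places within 100m rated above 4, sorted by rating.
// Requires the Maps JS API with the 'places' library loaded; the threshold is a placeholder.
function nearbyTopPlaces(position, callback) {
  var service = new google.maps.places.PlacesService(document.createElement('div'));
  service.nearbySearch({
    location: position,   // { lat: ..., lng: ... }
    radius: 100
  }, function(results, status) {
    if (status !== google.maps.places.PlacesServiceStatus.OK) { return callback([]); }
    var top = results
      .filter(function(p) { return p.rating && p.rating > 4; })
      .sort(function(a, b) { return b.rating - a.rating; });
    callback(top);
  });
}

// Usage
navigator.geolocation.getCurrentPosition(function(pos) {
  nearbyTopPlaces({ lat: pos.coords.latitude, lng: pos.coords.longitude }, function(places) {
    console.log(places.map(function(p) { return p.name + ' - ' + p.vicinity; }));
  });
});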

Hacking the browser, week5

I’m planning to create a Chrome extension that provides a new way of reading a newspaper article. It will deconstruct, analyze, and restructure the text of the article.

These are the steps for the final project.

#1 analyze – I will use IBM Watson’s AlchemyLanguage API to analyze the text, extracting keywords and sentiment. I need a real-time database to store and exchange the text data, since the Watson API only runs in Node.js. Watson usually takes a moment to analyze the data, so I might build in a delay while transmitting and retrieving it.

#2 restructure – I will show the analysis result, with the key implications and the emotion behind it, and every word of the article in an abstract way (I’m thinking of mapping emotion to a background color or to reinterpreted text). After collecting the data, the entire content of the page will be removed, and then the reinterpreted result will be drawn back onto the page.

#3 deconstruct – I will collect only the body text of the article. It’s fairly easy to get the body text, since it is usually marked with a certain class name. The only tricky part is that each newspaper website has its own class name, so I might use a conditional statement keyed on the URL of the site (see the sketch at the end of this post).

# Chrome APIs & permissions : chrome.tabs (activeTab) to get information about the webpage, including its URL, and to control the browser action.

# Browser action : clicking the browser action triggers those three steps.
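
As a sketch of the deconstruct step, a content script could pick the body-text selector based on the site’s hostname. The selectors below are placeholders, not the sites’ actual markup, and the ‘analyze’ message name is mine:

// content.js – collect only the article body text, branching on the site.
// The selectors are placeholders; each site's real class names need to be checked.
var SELECTORS = {
  'www.nytimes.com': 'p.story-body-text',
  'www.washingtonpost.com': 'article p'
};

function getArticleText() {
  var selector = SELECTORS[window.location.hostname] || 'article p';
  var paragraphs = document.querySelectorAll(selector);
  return Array.prototype.map.call(paragraphs, function(p) {
    return p.textContent;
  }).join('\n');
}

// Triggered from the browser action via chrome.tabs.sendMessage
chrome.runtime.onMessage.addListener(function(request, sender, sendResponse) {
  if (request.action === 'analyze') {
    sendResponse({ text: getArticleText() });
  }
});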