5th assignment : Ongoing project & Story Space

Ongoing project / 

I keep working on the project about storytelling with Google Street View. I was inspired by the 9-Eyes project, so I looked around some familiar places, such as the area near my first job or where I had my first date with my wife. I already had specific memories of those places, so I could recall them and layer a fictional story onto each scene. I tried to create a conversation when there are people in the scene, using text like the speech bubbles in comic books.

[screenshot]

(the street where I had my first date)

[screenshot]

(the back street of the company where I worked)

[screenshot]

I plan to make a series of scenes with conversations on them, weaving together the context from my memory and the circumstances found in Street View.
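To make that concrete, here is a minimal sketch of how a speech bubble could sit inside a Street View scene using the Google Maps JavaScript API. The coordinates, element id, and quote below are placeholders, not the actual scenes above, and an InfoWindow is just one possible way to fake the bubble.

// Assumes the Google Maps JavaScript API is already loaded on the page.
function initScene() {
  var position = { lat: 37.5658, lng: 126.9751 }; // placeholder location

  var panorama = new google.maps.StreetViewPanorama(
    document.getElementById('scene'), // placeholder element id
    { position: position, pov: { heading: 120, pitch: 0 } }
  );

  // An InfoWindow anchored inside the panorama works like a comic-book
  // speech bubble attached to a spot in the scene.
  var bubble = new google.maps.InfoWindow({
    content: '"Are you kidding me?"',
    position: position
  });
  bubble.open(panorama);
}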

 

Story space / 

There are two searchable elements in my project: the conversation and the place. When I create a new scene, a short snippet explaining the scene will be generated in the search column. For example,

— “Are you kidding me?” @ Jung-dong, Seoul 

Users can choose a story based on what kind of conversation the people in the scene are having and where they are.
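As a rough sketch of how those two elements could drive the search column (the record shape and function names below are my own assumptions, not final code):

// Each scene stores its two searchable elements.
var scenes = [
  { quote: 'Are you kidding me?', place: 'Jung-dong, Seoul' }
  // more scenes get pushed here as I create them
];

// Build the snippet shown in the search column.
function snippet(scene) {
  return '— "' + scene.quote + '" @ ' + scene.place;
}

// The search matches either element: the conversation or the place.
function searchScenes(term) {
  term = term.toLowerCase();
  return scenes.filter(function (s) {
    return s.quote.toLowerCase().indexOf(term) !== -1 ||
           s.place.toLowerCase().indexOf(term) !== -1;
  }).map(snippet);
}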

4th assignment : interaction

/Update about background/

I’m trying to use the Google Street View API as a background. Like this project, there would be limitless options for the background if I could integrate my 3D characters into Street View. The same action, location, or relationship between characters could expand into countless stories set in different locations around the world.
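One way I’m considering to pull this off (an assumption at this point, not a settled approach) is to render the panorama in one div and draw the three.js characters on a transparent canvas stacked on top of it:

// Assumes both the Google Maps API and three.js are loaded.
var panorama = new google.maps.StreetViewPanorama(
  document.getElementById('background'),          // placeholder element id
  { position: { lat: 37.5658, lng: 126.9751 } }   // placeholder location
);

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 5;

var renderer = new THREE.WebGLRenderer({ alpha: true }); // transparent, so Street View shows through
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.domElement.style.position = 'absolute';
renderer.domElement.style.top = '0';
renderer.domElement.style.pointerEvents = 'none'; // let clicks reach Street View underneath
document.body.appendChild(renderer.domElement);

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();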

 

/Interaction/ 

I did two tests for the interaction. Check the test site here for a demo.

# location : the user’s current location, or a place they search for, would be the interaction element that decides the background. It could be one of my everyday places or a completely unknown, imaginary place. (reference code for the Google Street View panorama)
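A minimal sketch of this location option, based on the panorama reference code (the element id is a placeholder, and resolving a searched place through the Geocoder is my own assumption):

// Start from the browser's current position...
function startAtCurrentLocation() {
  navigator.geolocation.getCurrentPosition(function (pos) {
    showPanorama({ lat: pos.coords.latitude, lng: pos.coords.longitude });
  });
}

// ...or from a place the user typed in.
function startAtSearchedPlace(query) {
  new google.maps.Geocoder().geocode({ address: query }, function (results, status) {
    if (status === 'OK') showPanorama(results[0].geometry.location);
  });
}

function showPanorama(position) {
  new google.maps.StreetViewPanorama(
    document.getElementById('pano'), // placeholder element id
    { position: position, pov: { heading: 0, pitch: 0 } }
  );
}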

# moving the foreground element with the mouse : As I couldn’t use Kinectron this week, I tested moving the foreground element with the mouse instead. Since I haven’t decided on the foreground elements yet, I just used one of the three.js example objects. As the mouse moves, the object follows.
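And a minimal sketch of the mouse test, with a placeholder cube standing in for the future Kinectron-driven foreground element:

// Basic three.js setup for the test.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 10;
var renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
scene.add(cube);

document.addEventListener('mousemove', function (event) {
  // Convert the pixel position to the -1..1 range, then spread the cube across the view.
  var x = (event.clientX / window.innerWidth) * 2 - 1;
  var y = -(event.clientY / window.innerHeight) * 2 + 1;
  cube.position.set(x * 5, y * 5, 0);
});

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();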

[screenshots]

3rd assignment : Creating 3D objects with Fuse

For this week’s assignment, I tried to create foreground objects using Fuse. It’s not hard to build an object, as Fuse’s UI is quite straightforward. I made a bunch of characters with different animations and exported them as ‘.dae’ files so I could include them in the three.js background.

[screenshots]
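For reference, loading the exported ‘.dae’ into the three.js scene follows the standard ColladaLoader example; the file path and scale below are placeholders rather than my exact settings.

// Assumes ColladaLoader.js is included alongside three.js, and that
// `scene` is the existing THREE.Scene.
var loader = new THREE.ColladaLoader();
loader.load('models/character.dae', function (collada) {
  var character = collada.scene;
  character.scale.set(0.01, 0.01, 0.01); // the export comes in very large; adjust as needed
  scene.add(character);
});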

The first problem happened when using the Collada example. It worked when loading only the character, but failed when I imported it with its animations, with the error message “couldn’t find joint mixamo rig…”. I searched Google for a while but couldn’t figure out how to fix it. I guess ColladaLoader.js isn’t perfectly compatible with Mixamo’s ‘.dae’ export.

[screenshots]

Secondly, it’s a little bit tricky to combine the foreground object and the background 360 image in a single JS and HTML file. I could grasp how each of them works on its own, but had a difficult time setting up (init) both at the same time, as I’m still not used to three.js.
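Here is a rough sketch of the combined init I’m aiming for, with the 360 photo mapped onto the inside of a big sphere and a placeholder object in the foreground (the texture path and the cube stand in for the real assets):

var scene, camera, renderer;

function init() {
  scene = new THREE.Scene();
  camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
  renderer = new THREE.WebGLRenderer();
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  // Background: a sphere turned inside-out, with the 360 photo as its texture.
  var geometry = new THREE.SphereGeometry(500, 60, 40);
  geometry.scale(-1, 1, 1); // invert so the texture faces inward
  var sphere = new THREE.Mesh(
    geometry,
    new THREE.MeshBasicMaterial({ map: new THREE.TextureLoader().load('pano.jpg') })
  );
  scene.add(sphere);

  // Foreground: the Collada character (here a placeholder cube) in front of the camera.
  var cube = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
  cube.position.z = -5;
  scene.add(cube);
}

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}

init();
animate();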

I’m still getting used to the technical side of each element and haven’t yet decided what story I will design. I’m fascinated by how easily I could create detailed 3D characters in Fuse, so I’m thinking about creating several distinctive characters with it and giving users different options for the background to make variations of the storytelling.

2nd assignment : 360 photo as a background

We commonly imagine some kind of vast natural scenery when it comes to 360-degree photos or video. Even when it’s armed with new technology, we are not that shocked, because we’ve already experienced those kinds of scenes countless times with our entire senses. Contrary to that vastness, I wanted to capture uncommon scenes in life: the insides of ordinary man-made objects, such as a microwave or a refrigerator. Those are too small or awkward to get into, so we never know how it feels to be inside them. I expected a strange emotion I hadn’t felt before, coming from the confinement and the distorted view of materials such as plastics and wires. The result? I was seized by a little discomfort and strangeness, as if in a spaceship.

 

[microwave]

[screenshots]

[oven]

[screenshots]

[refrigerator]

[screenshots]

 

1st assignment : sequelize

A good story creates a sequel, and vice versa. All kinds of data – characters, history, plot, etc. – are the basis for expansion and possibility. Fan fiction proves it. There is a collective process of making narratives. But our imagination is somehow limited, and arbitrary things can sometimes inspire us to go in a different direction.

So here is a slightly silly attempt to make a sequel to Star Wars. Anyone can be part of this serialization process, in which the story is generated one sentence at a time, each by a different person. But not entirely from your imagination: half of it comes from you, through the keyword you propose, and the other half is left open for more possibilities. When you input a keyword, it searches for tweets related to that keyword. Look carefully through each tweet, and if you’re sure one of them is a hell of a perfect phrase for the following sentence, save it to share the story. Then just wait for the next lines.
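Under the hood, the tweet-search step could look something like the sketch below, using the ‘twit’ package against Twitter’s standard search endpoint; the keys, names, and parameters here are placeholders, and the live site may do it differently.

// Server-side sketch (Node.js) of fetching candidate phrases for a keyword.
var Twit = require('twit');

var T = new Twit({
  consumer_key: process.env.TWITTER_KEY,
  consumer_secret: process.env.TWITTER_SECRET,
  access_token: process.env.TWITTER_TOKEN,
  access_token_secret: process.env.TWITTER_TOKEN_SECRET
});

// Given the user's keyword, return the text of matching tweets.
function searchTweets(keyword, callback) {
  T.get('search/tweets', { q: keyword, count: 20, lang: 'en' }, function (err, data) {
    if (err) return callback(err);
    callback(null, data.statuses.map(function (t) { return t.text; }));
  });
}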

[screenshots]

project website

source code