Initial Dialogs

[11-02-19 8:46:23 PM] MN: I’ve been thinking more about our project and looking around the web for more ideas

[11-02-19 8:46:34 PM] MN: I still really like the idea of combining Max with Unity

[11-02-19 8:46:50 PM] SKC: so, what is your idea so far?

[11-02-19 8:46:56 PM] SKC: i have some thoughts..

[11-02-19 8:47:44 PM] MN: I like the idea of combining some of the conversations we have had with the Lucid Dreaming

[11-02-19 8:48:07 PM] MN: creating an installation that deals with dreams or memories

[11-02-19 8:48:16 PM] MN: its very subjective but I think it has potential

[11-02-19 8:48:25 PM] SKC: i was wondering what you think about putting 4 speakers in closer to the installation component and 4 further out at 45-degree angles (remote)

[11-02-19 8:49:03 PM] MN: hmmm…what are you thinking of achieving in terms of an audio effect?

[11-02-19 8:49:37 PM] MN: it won’t really make a difference except that the farther speakers will give you better frequency definition in the lower frequencies (bass)

[11-02-19 8:50:15 PM] SKC: i was wondering if there will be a greater perceptual 3d depth by physically moving some other speakers further out

[11-02-19 8:50:59 PM] SKC: and because people interacting with or observing the installation might wander through the sound space

[11-02-19 8:51:19 PM] MN: getting proper depth control depends on how you set up your mix…speaker positions help but it’s really what you do to the sounds using production techniques that makes the difference
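
A minimal sketch, in Unity-flavoured C#, of the kind of production technique MN is describing: as a sound’s virtual distance grows its gain drops and its reverb share rises, which reads as depth on any speaker layout. The constants are illustrative, not values from the project.

```csharp
using UnityEngine;

// Illustrative only: mix-side depth cues.
// Gain falls off with virtual distance and the reverb (wet) share rises,
// so a sound reads as "far away" regardless of physical speaker placement.
public static class DepthCues
{
    // Linear gain for a sound at the given virtual distance (metres).
    public static float GainForDistance(float distance)
    {
        const float referenceDistance = 1f;                    // full volume inside 1 m
        return referenceDistance / Mathf.Max(distance, referenceDistance);
    }

    // Fraction of the signal sent to a reverb return (0 = dry, 1 = fully wet).
    public static float WetForDistance(float distance, float maxDistance = 20f)
    {
        return 0.8f * Mathf.Clamp01(distance / maxDistance);   // always keep some dry signal
    }
}
```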

[11-02-19 8:51:28 PM] MN: I like that idea and it’s definitely possible

[11-02-19 8:51:52 PM] MN: I have been also looking into building environments in unity that look super abstract like this: http://vimeo.com/12700368

[11-02-19 8:53:05 PM] MN: I think the bottom line is between you and me we have enough technical skills to pull off a lot of stuff…the question is what is our concept? what are the underlying themes we want to demonstrate in our project?

[11-02-19 8:53:10 PM] SKC: yeah.. i have a narrative that i am interested in exploring that speaks about multiple layers of physical space like underground for instance, but that is largely represented by video of abstracted faces

[11-02-19 8:54:03 PM] SKC: so when i say narrative, i am thinking this could be recorded speech

[11-02-19 8:54:20 PM] MN: ok but what is it about?

[11-02-19 8:54:21 PM] SKC: which moves around in 3d space depending on various bio data

[11-02-19 8:54:51 PM] SKC: it is about the ‘corner monster’

[11-02-19 8:55:06 PM] MN: :o what?

[11-02-19 8:55:10 PM] MN: lol

[11-02-19 8:55:18 PM] MN: can u explain the idea a bit?

[11-02-19 8:56:09 PM] MN: it reminded me of this website: http://www.rmx.cz/monsters/

[11-02-19 8:58:37 PM] SKC: a representative entity of human affective interaction with sensual perception / it is not that cute lol / and i am thinking that it could be interesting to have two screens behind the two interacting participants (this is a current model) so that each person is constantly distracted from direct interaction by the mediated interaction

[11-02-19 8:59:40 PM] SKC: what is gonna be on the screen could be visual representations of the spoken narrative, but in a different time and space

[11-02-19 9:00:06 PM] MN: so how are the users interacting?

[11-02-19 9:00:53 PM] MN: so you want to have 2 screens in front of the users and 2 behind them?

[11-02-19 9:01:32 PM] SKC: hang on, drawing will come…

[11-02-19 9:01:52 PM] MN: lol..ok

[11-02-19 9:08:43 PM] SKC: sorry, it is an awful drawing for an artist, but it may give you a quick sense of what i am imagining..

[11-02-19 9:08:47 PM] SKC posted file rough.jpg to members of this chat

[11-02-19 9:09:20 PM] MN: :)) that’s cute

[11-02-19 9:09:23 PM] SKC: the reds are the two possible participants, and the blues are screens with video

[11-02-19 9:09:28 PM] SKC: sorry.. lol

[11-02-19 9:09:39 PM] MN: so 1 screen in front and one behind

[11-02-19 9:09:54 PM] SKC: green rectangle is (may be) a table

[11-02-19 9:10:03 PM] MN: how does bio feedback fit in the installation?

[11-02-19 9:11:04 PM] SKC: they will both have gsr, but one gsr will control the spoken voices’ sequencing, and the other gsr will control spatial positioning of the ‘corner monster’

[11-02-19 9:11:28 PM] MN: are the corner monsters visual or an auditory effect?

[11-02-19 9:11:38 PM] MN: or both?

[11-02-19 9:12:44 PM] SKC: auditory, abstract,

[11-02-19 9:13:00 PM] MN: ok good…I don’t think I can animate monsters :D

[11-02-19 9:13:16 PM] MN: so back to the GSR

[11-02-19 9:13:36 PM] SKC: but how the visual component affects the participants will be another factor modulating the biofeedback

[11-02-19 9:13:37 PM] MN: so if there is stimulation, then it triggers a new narrative?

[11-02-19 9:14:00 PM] SKC: yeah,

[11-02-19 9:14:12 PM] SKC: more accurately

[11-02-19 9:14:27 PM] MN: ok…I still don’t know what the narrative is about or what corner monsters are…

[11-02-19 9:14:46 PM] SKC: the 3d spatial position and the position in the textual sequencing and speed
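
A minimal sketch, assuming each GSR reading arrives in Unity already normalised to 0..1, of the mapping described above: one participant’s value selects and paces the spoken narrative fragment, the other’s moves the audio-only corner monster around the listener. All names, ranges and the circular path are placeholders, not the project’s actual design.

```csharp
using UnityEngine;

// Sketch of the GSR-to-narrative mapping; values assumed pre-normalised to 0..1.
public class GsrNarrativeMapper : MonoBehaviour
{
    public AudioClip[] narrativeLines;    // pre-recorded fragments of the spoken narrative
    public AudioSource narrativeSource;   // plays the narrative
    public Transform cornerMonster;       // audio-only object moved around the listener
    public float monsterRadius = 4f;      // how far the monster can wander (metres)

    public void UpdateFromSensors(float gsrA, float gsrB)
    {
        // Participant A: arousal selects the next fragment and nudges its playback speed.
        int index = Mathf.Clamp(Mathf.FloorToInt(gsrA * narrativeLines.Length),
                                0, narrativeLines.Length - 1);
        narrativeSource.pitch = Mathf.Lerp(0.8f, 1.3f, gsrA);
        if (!narrativeSource.isPlaying)
        {
            narrativeSource.clip = narrativeLines[index];
            narrativeSource.Play();
        }

        // Participant B: arousal swings the corner monster around the listener.
        float angle = gsrB * 2f * Mathf.PI;
        cornerMonster.position = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * monsterRadius;
    }
}
```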

[11-02-19 9:15:43 PM] SKC: ok… the corner monster ties in with the writings about panoptic/holoptic perception

[11-02-19 9:16:05 PM] SKC: the corner monster is us, the thing we never see

[11-02-19 9:16:44 PM] MN: ah yes ok

[11-02-19 9:17:33 PM] SKC: i think we need some conceptual basis for bio interactive contextualization

[11-02-19 9:17:50 PM] SKC: so that is what my narrative is about…..

[11-02-19 9:18:01 PM] SKC: in an abstract sense….

[11-02-19 9:18:52 PM] SKC: i have also been reflecting on what you have told me about your work so far and this seemed a possible integration that allows us both to pursue our research while still collaborating

[11-02-19 9:19:54 PM] MN: ya..I like the idea of using panoptic for sound….essentially we dedicate 4 speakers for soundscapes and the other 4 for dialogue and music (in acousmatic form)

[11-02-19 9:20:26 PM] SKC: by the way i can make a virtual monster model (3d studio) but i don’t think this is necessary, I’m more interested in abstraction of the idea not literal forms

[11-02-19 9:20:31 PM] MN: all the sounds will be dynamic…that means we get realtime panning, reverberation and volume control…producing the essence of depth
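
A minimal sketch of what “dynamic” could mean here in Unity terms: a sound whose AudioSource moves through the scene gets realtime panning and distance attenuation from the engine, and an attached AudioReverbFilter can be pushed wetter as the source recedes to reinforce depth. The numbers are illustrative.

```csharp
using UnityEngine;

// Sketch: depth from realtime position, volume and reverb on a moving sound source.
[RequireComponent(typeof(AudioSource), typeof(AudioReverbFilter))]
public class DynamicDepth : MonoBehaviour
{
    public Transform listener;      // typically the object carrying the AudioListener
    public float maxDistance = 20f;

    AudioSource source;
    AudioReverbFilter reverb;

    void Start()
    {
        source = GetComponent<AudioSource>();
        reverb = GetComponent<AudioReverbFilter>();
    }

    void Update()
    {
        float far = Mathf.Clamp01(Vector3.Distance(transform.position, listener.position) / maxDistance);

        source.volume = 1f - 0.7f * far;                      // quieter when far away
        reverb.reverbLevel = Mathf.Lerp(-2000f, 500f, far);   // wetter when far away (millibels)
    }
}
```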

[11-02-19 9:20:36 PM] SKC: yeah, that sounds good

[11-02-19 9:20:46 PM] MN: no, I don’t think we need to be too literal about the monsters

[11-02-19 9:21:21 PM] MN: I think of it as the idea of humans being somewhat primitive in the way we perceive things

[11-02-19 9:21:38 PM] SKC: agree,

[11-02-19 9:21:43 PM] MN: and that could be holoptic

[11-02-19 9:21:54 PM] SKC: yes

[11-02-19 9:22:04 PM] MN: ok…I like this idea :)

[11-02-19 9:22:23 PM] MN: I was concerned at first that we were too abstract but I think we can explain the idea

[11-02-19 9:22:30 PM] MN: so lets talk about the visuals

[11-02-19 9:22:39 PM] MN: which I think would be the most challenging

[11-02-19 9:22:56 PM] MN: there is a space which they will be immersed in..right?

[11-02-19 9:23:03 PM] MN: an abstract terrain?

[11-02-19 9:23:49 PM] MN: plus imagery on top (video clips?)

[11-02-19 9:26:50 PM] SKC: i had thought about abstract terrain models, i am interested in layers of information under the surface of things, like the earth for instance, however after a lot of consideration about the data (how we get it) and what the data means within the complexity of this timeline, i think the idea is better represented with a very minimalist representation, like one part of the human body

[11-02-19 9:27:54 PM] MN: I really like this kind of idea for an environment: http://www.fractal-recursions.com/files/fractal-05170508v.html

[11-02-19 9:28:48 PM] MN: or this http://www.youtube.com/watch?v=bMk8cA099xc&feature=player_embedded

[11-02-19 9:28:53 PM] SKC: this would mean very little 3d data in unity, if any, meaning you can focus on bio modulation of audio.. therefore i was imagining projecting simple video onto possibly a sculptural transparent screen

[11-02-19 9:29:20 PM] MN: ya that could work…I’m still learning terrain building in unity

[11-02-19 9:29:21 PM] SKC: are you interested in fractals?

[11-02-19 9:29:37 PM] MN: I like fractals but I don’t know how well I can pull it off in unity

[11-02-19 9:30:19 PM] SKC: only if you can think of a terrain model relevant to our project, otherwise it just becomes pretty pictures, i have found that to be a dangerous road to go down..

[11-02-19 9:30:36 PM] MN: I know exactly what you mean!

[11-02-19 9:31:10 PM] MN: we can keep the visuals very minimal…and I really like the idea of experimenting with projecting on different surfaces….

[11-02-19 9:31:18 PM] SKC: for instance, i can make you a terrain that you can import, but why should we do this? if we are making it all up, it has no real relation or content to the bio data

[11-02-19 9:31:42 PM] MN: no exactly…it has to relate back to our concept

[11-02-19 9:31:53 PM] MN: but we have to think of something for the visuals

[11-02-19 9:32:09 PM] MN: unless we go with sound only!

[11-02-19 9:32:17 PM] SKC: yes, and we can give the narrative different speed/sequencing between video and audio, and see how the participants’ bio data gets altered or shifted based on the arousals inherent in the narrative

[11-02-19 9:32:41 PM] SKC: yes, there will be visuals on the screens (the blue one in the rough image)

[11-02-19 9:33:03 PM] SKC: behind the participants, I’m thinking transparent, so it is viewable from both sides

[11-02-19 9:33:17 PM] MN: by the way we can import movies into unity and set them up as textures
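
For reference, a minimal sketch of the movie-as-texture approach as it worked in the Unity of that era, where an imported clip becomes a MovieTexture assignable to any material; newer Unity versions replace this with VideoPlayer, but the principle is the same. The asset name is a placeholder.

```csharp
using UnityEngine;

// Sketch: play an imported video clip as the texture of whatever surface this sits on.
public class MovieOnSurface : MonoBehaviour
{
    public MovieTexture narrativeClip;   // e.g. the speaking-mouth footage

    void Start()
    {
        Renderer surface = GetComponent<Renderer>();
        surface.material.mainTexture = narrativeClip;   // the video plays as the surface's texture
        narrativeClip.loop = true;
        narrativeClip.Play();
    }
}
```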

[11-02-19 9:33:55 PM] MN: the other option is we can setup video cams and modify the image of the participants in realtime based on the bio feedback readings
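
And a minimal sketch of the live-camera alternative: WebCamTexture puts the camera feed on a surface, and a simple per-frame tint driven by an assumed 0..1 GSR value stands in for whatever realtime image processing the installation would actually use; the gsr field is assumed to be supplied elsewhere.

```csharp
using UnityEngine;

// Sketch: live camera feed on a surface, modulated by a biofeedback value.
public class BioModulatedCamera : MonoBehaviour
{
    public float gsr;        // assumed to be fed in by the biofeedback reader, 0..1

    WebCamTexture cam;
    Renderer surface;

    void Start()
    {
        surface = GetComponent<Renderer>();
        cam = new WebCamTexture();
        surface.material.mainTexture = cam;
        cam.Play();
    }

    void Update()
    {
        // Higher arousal washes the participants' image toward red.
        surface.material.color = Color.Lerp(Color.white, Color.red, gsr);
    }
}
```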

[11-02-19 9:34:57 PM] SKC: I’m thinking, let’s say, simply a mouth speaking the narrative, it gives a linear sense of the story, people tend to assemble meanings from mouth shapes, but these shapes won’t necessarily agree with what they hear, this would be projected on sculptural screens

[11-02-19 9:35:36 PM] MN: the only issue is the synchronization of the mouth movements with the audio

[11-02-19 9:36:02 PM] MN: also we have to be careful because controlling the semantics of speech is a very complex process

[11-02-19 9:36:23 PM] MN: a simple speed is ok but doesn’t have any “wow” factor

[11-02-19 9:36:32 PM] MN: i meant speed change

[11-02-19 9:36:47 PM] SKC: i also thought about the idea of projecting the participants’ own face or body, but that has a meaning of mirroring, people tend to react differently when they see themselves in the mirror or when they see somebody else trying to speak something

[11-02-19 9:37:03 PM] MN: for sure…I agree

[11-02-19 9:37:09 PM] SKC: ok, there is no synchronization between video and audio,

[11-02-19 9:37:19 PM] MN: oh good

[11-02-19 9:37:25 PM] MN: that’s a big relief

[11-02-19 9:37:55 PM] MN: with the speech, we can control distance and spatial positioning…making it seem farther away or closer

[11-02-19 9:37:56 PM] SKC: video gives the linear sense of the narrative, and the audio positioning will be modulated by the bio data of the participants who are interacting with each other and the screens

[11-02-19 9:38:02 PM] SKC: yes,

[11-02-19 9:38:46 PM] MN: I think what would add greater depth to the visuals is if we layer some kind of imagery behind the mouth as the narrative plays out

[11-02-19 9:38:51 PM] MN: what do u think?

[11-02-19 9:39:29 PM] MN: the imagery is linked to certain events in the story

[11-02-19 9:40:01 PM] MN: it comes in only at certain key points in the story

[11-02-19 9:43:04 PM] SKC: it could be interesting to give other layers of imagery, I’m just worried that it could be too complex for the visual part, we can modify it and see if it seems appropriate as we progress

[11-02-19 9:43:19 PM] MN: for sure..

[11-02-19 9:43:20 PM] SKC: but if we do that, i agree it should be abstract and only intermittent

[11-02-19 9:43:37 PM] MN: can u email me the script?

[11-02-19 9:44:09 PM] MN: I need to get a sense of what we can do in terms of the sound design plus I plan to work on the 811 write up tomorrow

[11-02-19 9:44:32 PM] SKC: it is in my notebook, i need to type it up, i will try to send some portions tonight….

[11-02-19 9:44:40 PM] SKC: it is pretty Kafka

[11-02-19 9:45:08 PM] MN: ya I don’t need a lot just a sample so I can visualize it a bit more for myself

[11-02-19 9:45:39 PM] MN: plus it will give me a better sense of what the role of unity and max would be for the project

[11-02-19 9:45:50 PM] SKC: yes, yes,

[11-02-19 9:47:26 PM] MN: well the idea is much more clear…some of the issues that need to be determined are what parameters the GSR is controlling, the type of soundscapes that would work, synchronization of biofeedback data with sound and visuals, the type of equipment we need, cpu horsepower!

[11-02-19 9:47:52 PM] MN: and content! plus how we plan to split up the work
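
On the synchronization point, a common bridge between the Max/sensor side and Unity is to stream values over the network. The sketch below assumes the sending side emits one plain ASCII float per UDP datagram on an agreed port; Max’s [udpsend] actually speaks OSC, so a real setup would more likely pair it with an OSC parser on the Unity side.

```csharp
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Sketch: receive a stream of biofeedback values over UDP for use by other scripts.
public class BioDataReceiver : MonoBehaviour
{
    public int port = 9000;      // must match the port the sensor side sends to
    public float latestGsr;      // last value received, read by the mapping scripts

    UdpClient udp;

    void Start()
    {
        udp = new UdpClient(port);
    }

    void Update()
    {
        // Drain everything that arrived since last frame; keep only the newest value.
        while (udp.Available > 0)
        {
            IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
            byte[] packet = udp.Receive(ref sender);
            string text = System.Text.Encoding.ASCII.GetString(packet);
            float value;
            if (float.TryParse(text, out value))
                latestGsr = value;
        }
    }

    void OnDestroy()
    {
        udp.Close();
    }
}
```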

[11-02-19 9:49:23 PM] MN: we don’t need to figure all of these things tonight but I will formulate some more ideas after I sip a bit more wine and sleep on it a bit ;)

[11-02-19 9:49:42 PM] SKC: i could work with the abstract sound of the corner monster in max (maybe) and you could work with the other spoken narratives in unity from the bio feedback data

[11-02-19 9:50:14 PM] MN: ya I mean we have to see which tool is more appropriate for the content….

[11-02-19 9:50:32 PM] MN: I need to do a bit more research on the audio side of unity

[11-02-19 9:51:59 PM] SKC: i think that is a good thing to focus on and i will get the script organized, do some max exploration also, and some video experiments

… conversation wanders and finishes…

