Tags

Monday, September 30, 2013

Life-based Systems Project Update 04

AH! IT WORKS!

I finally have a working version that actually plays back music!

The green circles are temporary, and are meant just to point out which nodes on the tree correspond to which notes being played. In the future, I would like them to appear more as "leaves", and I'm thinking about playing with radiating sine waves that pulse out when the nodes play.

NOTE: In the non-debug mode, the text overlay is not present and the display is full screen.



This gets me closer, but I still have a long way to go.

Issues

  • Because I was rushing for critique, the code only allows one of these systems to exist at a time. I very much want a whole garden of them, but I'm taking it a step at a time.
  • The system is static; I want the tree to draw itself as the notes play, but right now the entire tree is displayed all at once.
  • The input L-system is fixed, and requires code changes to define new systems.
  • Visually, circles will highlight for the wrong notes, or not highlight at all when they should. I think there's a disparity between how I initially build the tree and how I parse it incrementally later (two different functions).
  • My data structures do not represent all of the MIDI parameters I would like to use. Several parameters are fixed or arbitrarily randomized.

Sunday, September 29, 2013

[Aside] Sierpinski Strikes Again!

While working on my Digital Image homework last night for VIZA654, I inadvertently created a Sierpinski triangle in the green channel of my image.



I'm still pretty ecstatic, and I'm looking forward to figuring out exactly why the image was generated, if it works for all image resolutions and aspect ratios, and how I can better control it.
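My first guess at the "why": any per-pixel computation that reduces to a bitwise AND of the coordinates paints a Sierpinski triangle, independent of resolution or aspect ratio. A minimal sketch of the effect (not the actual homework code, just the classic pattern):

```cpp
#include <string>

// Pixel (x, y) is "inside" the Sierpinski triangle exactly when the
// binary representations of x and y share no set bits. Any channel
// computation that reduces to (x & y) == 0 will paint this pattern.
bool sierpinskiPixel(int x, int y) {
    return (x & y) == 0;
}

// Render a small grid as text to visualize the pattern.
std::string renderSierpinski(int size) {
    std::string out;
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            out += sierpinskiPixel(x, y) ? '#' : '.';
        }
        out += '\n';
    }
    return out;
}
```

Because the test is purely bitwise, cropping or rescaling the image just shows a different window onto the same self-similar pattern, which would explain it appearing at any resolution.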

That's all for now!

Thursday, September 26, 2013

Life-based Systems Project Update 03

Progress feels good. I'm finally making sense of my components.
Also, the ofxMidi exampleOut project is very informative.


  • The reason I was having trouble integrating MIDI in my project (and was considering falling back to WAV) is that the openFrameworks ProjectGenerator script does not successfully add the ofxMidi addon package to a new project (most likely the fault of the maintainer, not the ProjectGenerator).
  • The MIDI backend on Windows is RtMidi when using the ofxMidi addon. This makes setting up and using JACK or another MIDI backend unnecessary, and makes my sketch much more portable.
  • Windows (at least my build) does have a default MIDI port that routes to the Microsoft GS Wavetable Synth, and fortunately ofxMidi is able to map to this port by default.
    Not to mention, I don't need another program to output the audio.
  • The example projects for ofxMidi now compile properly on my machine, significantly reducing the amount of code I need to figure out myself.
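For my own notes: whichever library sends them, the messages that eventually reach the synth are just bytes defined by the MIDI spec. A sketch of how a note-on/note-off pair is packed (the helper names are mine, not ofxMidi's):

```cpp
#include <vector>
#include <cstdint>

// A MIDI note-on message is three bytes:
//   status   = 0x90 | channel   (channel is 0-15)
//   pitch    = 0-127            (60 = middle C)
//   velocity = 0-127            (velocity 0 acts as a note-off)
std::vector<uint8_t> noteOn(uint8_t channel, uint8_t pitch, uint8_t velocity) {
    return { static_cast<uint8_t>(0x90 | (channel & 0x0F)),
             static_cast<uint8_t>(pitch & 0x7F),
             static_cast<uint8_t>(velocity & 0x7F) };
}

// Note-off uses status 0x80 | channel; velocity is commonly 0.
std::vector<uint8_t> noteOff(uint8_t channel, uint8_t pitch) {
    return { static_cast<uint8_t>(0x80 | (channel & 0x0F)),
             static_cast<uint8_t>(pitch & 0x7F),
             static_cast<uint8_t>(0) };
}
```

Keeping this in mind should make debugging the stream easier if the addon ever misbehaves.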

More soon...

Wednesday, September 25, 2013

Life-based Systems Project Critique Day | Out Sick

Regrettably I was unable to attend class today. I came down sick in the afternoon.

Tuesday, September 24, 2013

Life-based Systems Project Update 02

Thoughts

  • Should I use branch depth or specific rules to build chords?

I  --> I IV V I
ii --> ii V I IV
IV --> IV V I ii
V  --> V I ii V

Which looks like this in code, by the way:

lsystem.addRule("1", "1451");
lsystem.addRule("2", "2514");
lsystem.addRule("4", "4512");
lsystem.addRule("5", "5125");


This would make the chords more portable to different keys, but might take too much control away from the system.
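The rewriting itself is simple; a minimal sketch of one generation of expansion (the `expand` helper below is my own, not the LSys library's):

```cpp
#include <map>
#include <string>

// Apply one generation of rewriting: every symbol with a rule is
// replaced by its successor; symbols without a rule are copied as-is.
std::string expand(const std::string& axiom,
                   const std::map<char, std::string>& rules) {
    std::string next;
    for (char c : axiom) {
        auto it = rules.find(c);
        next += (it != rules.end()) ? it->second : std::string(1, c);
    }
    return next;
}
```

Starting from the axiom "1" with the rules above, two generations give "1451" and then "1451451251251451".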

Monday, September 23, 2013

Life-based Systems Project Update 01


I have decided to use openFrameworks and the openFrameworksExtensions (ofx-) for my project. For those not familiar with openFrameworks, think of it as the C++ brother of Processing. It doesn't hide nearly as much from you, which presents a steeper learning curve for non-programmers but allows more control and efficiency as a tradeoff.

openFrameworks for Processing Users

Libraries

L-systems

I've been working on gathering together the libraries I'll need to accomplish my goals.

First I found a basic L-system library that I will be modeling my L-systems after. For the time being, it accomplishes the basics of what I need and has a simple draw class for debugging, which makes it convenient.

LSys/ofxLSys: https://github.com/daanvanhasselt/snippets/tree/master/LSystem











Audio Output

The idea is to eventually route MIDI output to a MIDI application where I can potentially run filters on the stream. I will probably use Ableton for this work, but for the meantime I don't want that to slow down development of the actual program.

Ableton Extension: https://github.com/tassock/ofxAbleton
Generic MIDI Extension: https://github.com/danomatika/ofxMidi



As an alternative in the meantime, I have downloaded some sampled-piano audio files, modified them, and exported them as WAV, from here:

University of Iowa Music Instrument Samples: http://theremin.music.uiowa.edu/MIS.html


Progress

  • I've got openFrameworks up and running (as seen in the image above)
  • I figured out how to install addons
  • I've rounded up the initial addons I need for my base proposal, and started looking at addons for my stretch goals
  • I'm working on modifying the parser from LSys to play music back while it reads the L-system, but it will need further modification so that the sounds play back appropriately
  • I'm currently working with just a subset of the C-major scale (C, F, G, A, D) because it's easier to make something coherent, and it limits my inputs and axioms.
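For reference, that subset maps cleanly onto MIDI note numbers in the middle-C octave (this helper is my own illustration, not the project's actual encoding):

```cpp
#include <map>
#include <stdexcept>

// MIDI note numbers for the scale subset, middle-C octave:
// 60 = middle C, and each semitone adds 1.
int noteFor(char symbol) {
    static const std::map<char, int> notes{
        {'C', 60}, {'D', 62}, {'F', 65}, {'G', 67}, {'A', 69}
    };
    auto it = notes.find(symbol);
    if (it == notes.end()) throw std::invalid_argument("symbol not in subset");
    return it->second;
}
```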

Notes

  • I'm worried about the MIDI side of things. If the WAV side is fast enough, I may just run with it for this project, though it wouldn't be ideal.
  • I need to make a decision about what I want the visuals to be so I can begin developing them as well. I feel very behind right now because of the weekend (family came in to visit, birthday)
  • I'm worried about the length of the strings getting too long, so I need to do a better job of encoding my 

Life-based Systems Project Idea

Idea

Generate and play "music" by interpreting L-system nodes as notes and same-level branches as chords

Proposal

  • The backend will be an L-system interpreter
  • Input will be fixed based on my experimentation
  • The resulting L-system will be interpreted and played back as music

Goals

  • Find or write a C++-based L-system interpreter
  • Experiment with L-systems that produce tree-like structures normally, but replace the F (forward) symbols with letters.
    • lowercase = transitions
    • uppercase = notes?
  • Modify the drawing l-system parser to play the l-system instead
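The interpretation I'm aiming for can be sketched as a small parser: uppercase letters are notes, lowercase letters are silent transitions, and brackets track branch depth so that same-depth notes can later be grouped into chords (this is my own illustration, not the LSys parser):

```cpp
#include <string>
#include <vector>
#include <cctype>

struct NoteEvent {
    char note;   // uppercase symbol, to be mapped to a pitch
    int  depth;  // branch depth at which the note occurs
};

// Walk a bracketed L-system string: '[' descends a branch, ']' returns,
// uppercase letters are notes, everything else (lowercase transitions,
// turn symbols like '+'/'-') is skipped for playback purposes.
std::vector<NoteEvent> interpret(const std::string& s) {
    std::vector<NoteEvent> events;
    int depth = 0;
    for (char c : s) {
        if (c == '[')      ++depth;
        else if (c == ']') --depth;
        else if (std::isupper(static_cast<unsigned char>(c)))
            events.push_back({c, depth});
    }
    return events;
}
```

For example, "C[+F][-G]A" yields C and A at depth 0 with F and G as depth-1 branch notes.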

Stretch Goals

  • Realtime playback of visuals in OpenFrameworks
  • MIDI Output / Streaming to 3rd party application
  • Randomization of input
  • Genetics and fitness to breed "better" L-systems for sound

Reference

http://www.tursiops.cc/fm/

Monday, September 9, 2013

Chance-based Systems Presentation

Presentation Critique


edit:

Based on feedback, I would like to add further explanation that I thought I had posted before, but had in fact forgotten to. Screenshots and planned future work still follow.








Concept and Purpose

Microscopic Monumentality:
One of my inspirations for this project was the feeling I experienced when I encountered large hanging canvases of chip art, only to discover that they were images of actual microchips. It was an interesting change of perspective that was provocative, at least personally. Part of this project was my hope to recreate that experience for others, but without access to a larger display that wouldn't also block my motion tracking, I wasn't able to achieve it.

Systematic Aesthetic:
I'm rather fond of mechanical, rigid, rectangular, and angular aesthetics, which is what drew me to using chip art as the vehicle for this expression in the first place. My biggest qualm with myself during the project was that I could not push the aesthetics further; I simply didn't start early enough to have the time, but it's on my roadmap.

Artistic Participation in the Technical:
Lastly, an idea I've been exploring in a more literal fashion is how to let those who are daunted by the technical participate in and enjoy it in a way that bridges that fear gap. This is the hardest of my goals to implement, for two reasons:
  1. The aesthetics have to be far more representational and evocative of something actually technical. If they don't look technical, why would anyone necessarily consider them technical?
  2. Because the project hinges on chance, there's a difficult balance in allowing for audience participation without direct influence. If the crowd drives the simulation knowingly, it loses much of its chance; if they do not feel any influence over the system at all, how do they actually experience it? How does one keep an audience interested enough to evoke wonder, yet perplexed enough that they never encroach on actual understanding?

I look forward to the opportunity to resolve these issues in the future, perhaps for a gallery type installation.

end edit.



Screenshots:


Without Outlines:



Analogous Color and Hue rules:

The following were generated on successive runs of the system for approximately 1.5 minutes









Planned Future Work:


  • Work around opacity problems inherent in Processing 2.0 (right now)
    • Fix fading functions to quickly fade new items and outlines in and out
  • Parameterize color profiles instead of selecting random ranges for more interesting palettes
  • Add perlin noise textures to improve visual complexity and aesthetic appeal
  • Experiment with adding lines on randomly selected objects to move toward visual target
  • Make or find a back-lit projector so a monitor does not need to be used
  • Use depth-based tracking instead of image-based tracking
    • This might help the system react much more effectively in the presence of larger crowds, where it appeared to falter. I had designed the system to work well with casual viewers, but I didn't anticipate such a large group being so close and enthralled ;]
  • Make fullscreen
  • Create system to more quickly/easily test and simulate so I don't have to wait for it to fully run



Sunday, September 8, 2013

Chance-based Systems Project Update 05

Today, I found myself exceedingly frustrated with the flob library I'm using. I originally chose it on the recommendation that it worked out of the box and would scale well to my project, but there are several weak points in its implementation, and the API is somewhat lacking in documentation. Reading the source helped, but not by much.

I was getting strange problems the longer my system ran, especially when there was little to no movement in front of the camera. It turns out that when there's no movement, flob resets the id counter for all of its blobs; I had been caching those ids and using them in a number of situations. Ultimately this caused a lot of blobs to be selected but then ignored and never drawn.

Rather than try to figure out how to recompile the library without breaking the rest of the implementation, I wrapped each blob in a thin wrapper class and stored my own id on it.
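The shape of the workaround, roughly (the project itself is in Processing/Java; this C++ sketch uses my own names, and naively keys on the library id where the real code would also match blobs across frames):

```cpp
#include <map>

// flob resets its internal blob ids when there's no movement, so the
// wrapper assigns its own monotonically increasing id the first time a
// blob is seen and keeps it for the blob's lifetime.
struct TrackedBlob {
    int id;      // our stable id, never reused
    int flobId;  // the library's (unreliable) id
};

class BlobRegistry {
public:
    // Return the stable wrapper for a library blob, creating one on
    // first sight.
    TrackedBlob& track(int flobId) {
        auto it = byFlobId.find(flobId);
        if (it == byFlobId.end())
            it = byFlobId.emplace(flobId, TrackedBlob{nextId++, flobId}).first;
        return it->second;
    }
private:
    std::map<int, TrackedBlob> byFlobId;
    int nextId = 0;
};
```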

The system is finally stable, and I think most of the remaining tweaks will be for aesthetics.

Here's some shots from today in Studio A (my twin came along as well):








While I really wanted to use the projector I checked out, I remembered my original reasoning for wanting to use a screen (besides being easier to test on and truer to color): if people are supposed to walk in front of this, then it needs to be back-projected or projected from the ceiling, and both of those sacrifice visual fidelity, time, and flexibility. It's still my intention to make the project work as a projected installation, but I really just need more time.


I'll finish cleaning up the aesthetics and post some screenshots on a final blog post for the presentation tomorrow. Until then!


Saturday, September 7, 2013

Chance-based Systems Project Update 04

Today is a smaller update, but the good news is COLOR!

I started looking through Processing's color utilities, which fortunately are much more user-friendly than the default Java ones. For now I'm just building up the system with the needed parameters so that tomorrow I can tweak the color ranges and palettes as finely as I choose.
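The analogous-hue rule I'm after amounts to picking hues within a narrow band around a base hue; a sketch with my own parameter choices (not Processing's API):

```cpp
#include <vector>
#include <cmath>

// Analogous palette: n hues evenly spread across a band of +/- spread
// degrees around baseHue, wrapped to [0, 360).
std::vector<float> analogousHues(float baseHue, float spread, int n) {
    std::vector<float> hues;
    if (n == 1) {
        hues.push_back(std::fmod(baseHue + 360.0f, 360.0f));
        return hues;
    }
    float step = (2.0f * spread) / (n - 1);
    for (int i = 0; i < n; ++i) {
        float h = baseHue - spread + step * i;
        hues.push_back(std::fmod(h + 360.0f, 360.0f));
    }
    return hues;
}
```

In the sketch itself the hues would then feed HSB color values, which Processing supports via colorMode.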














A couple of problems are arising: a lot of the blocks are being drawn over, so the data structures will need to be refactored into sortable ones. I would like to sort based on the visual "size" of the chips, and I think area will be enough. I could factor in the diagonals as well, but I don't think it will be necessary, and I'd rather not do more work if the aesthetic of the simpler solution works just as well.
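The sort itself is trivial once the structure is sortable; a sketch with my own struct names (largest first, so small chips land on top and aren't painted over):

```cpp
#include <algorithm>
#include <vector>

struct Chip {
    float w, h;
    float area() const { return w * h; }
};

// Draw order: largest chips first, so smaller ones are drawn later and
// stay visible on top.
void sortForDrawing(std::vector<Chip>& chips) {
    std::sort(chips.begin(), chips.end(),
              [](const Chip& a, const Chip& b) { return a.area() > b.area(); });
}
```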

Yesterday I checked out a projector and booked time in Studio A to get the feel of how the project might actually be set up as an installation, and tomorrow I'll hopefully be able to find a few people to help me test it in Studio A.

Friday, September 6, 2013

Chance-based Systems Project Update 03

Since Wednesday's critique I've been able to move a bit further. Currently, I'm tweaking the values for the tracked blobs and building the data structures I'll need to track and display them properly. It's taking me a little time to remember my Java, but it's not too bad so far.
















The major steps today were identifying all the debugging information I'd need and getting it reliably and correctly output onto a debugging screen. The red and light green boxes you see above are blobs that have been selected by my random/fitness check, and will be used for mapping onto the parameters of my chips.

Lastly, it took a bit to grok how to make a stopwatch/timer based on the frame rate, and adjusting to the little tricks Processing pulls behind the scenes was... fun. Now that I have it, though, accomplishing animations and timeouts will be much easier.
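The core of it is just counting frames against the frame rate; a sketch with my own names (in Processing, draw() would call tick() once per frame):

```cpp
// A stopwatch driven by the sketch's frame rate rather than wall-clock
// time: tick() is called once per frame, and elapsed time is simply
// frames / frameRate.
class FrameTimer {
public:
    explicit FrameTimer(float frameRate) : rate(frameRate) {}
    void tick() { ++frames; }
    float seconds() const { return frames / rate; }
    bool expired(float timeoutSeconds) const {
        return seconds() >= timeoutSeconds;
    }
    void reset() { frames = 0; }
private:
    float rate;
    long frames = 0;
};
```

One caveat baked into this design: if the sketch drops below its target frame rate, the timer slows down with it, which is actually what I want for keeping animations in step with drawing.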

Now I just need to start making designs!

Wednesday, September 4, 2013

Chance-based Systems Project Update 02

Things finally seem to be coming together.

Preparing my thoughts for crit tomorrow:
  • The project is interactive (dynamic)
  • Audience will interact with it over time
  • Preferred audience is a crowd because it will enhance movement
  • Audience should understand they are affecting the system indirectly, but should not be able to understand the underlying process directly (to keep things feeling random). They should feel more like a trigger than an influence (even though they actually are one)
  • Final images should be projected (and printed if static images are interesting, we'll see...)


idea

Using an implementation of blob detection, randomly select some blobs with movement above a threshold (crowd or individual movement, not noise). Draw selected blobs to screen with varying color based on any data available from blobs library (depends on blob implementation). Auxiliary shapes can be generated as necessary, will test.
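The selection step in the idea above can be sketched as a two-stage filter, a movement threshold followed by a random keep-probability (everything here is illustrative; the real blob data comes from the tracking library):

```cpp
#include <random>
#include <vector>

struct Blob {
    float vx, vy;  // velocity reported by the tracking library
};

// Keep only blobs whose speed clears the noise threshold, then keep
// each survivor with probability p so the selection stays chance-driven.
std::vector<Blob> selectBlobs(const std::vector<Blob>& blobs,
                              float speedThreshold, float p,
                              std::mt19937& rng) {
    std::uniform_real_distribution<float> coin(0.0f, 1.0f);
    std::vector<Blob> chosen;
    for (const Blob& b : blobs) {
        float speed2 = b.vx * b.vx + b.vy * b.vy;
        if (speed2 >= speedThreshold * speedThreshold && coin(rng) < p)
            chosen.push_back(b);
    }
    return chosen;
}
```

Comparing squared speeds avoids a square root per blob, and tuning `speedThreshold` is what separates crowd movement from camera noise.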



























edit

The flob package was one of the ones Thomas mentioned a while back. It has trackable blobs that expose both the lifetime and velocity of a blob, which were two important variables I was hoping to map to the size/color/opacity of my blocks. It also works in both oF and Processing 2+.

Thomas also had a single image of the reference art from our trip to the Post Tower.














Fortunately, that's one of the more representative images, and I can just make out the letters "ARITHM", which of course (as Laura was trying to remind me) means these images are from an exhibition at the Bonn Arithmeum. Searching their site was almost futile, but I found a few helpful things:

http://en.wikipedia.org/wiki/Very-large-scale_integration

       



Hopefully that will give a bit more direct insight into how to draw these objects effectively.
Also, Flickr is helpful:

http://www.flickr.com/photos/bibi/5939565186/sizes/z/in/photostream/
http://www.flickr.com/photos/bibi/5939007795/sizes/z/in/photostream/
http://www.flickr.com/photos/mitko/2210284854/sizes/z/in/photostream/

Tuesday, September 3, 2013

Chance-based Systems Project Update 01

Today has been a bit worrisome as far as project ideas go. I'm stuck in brainstorming world, but fortunately I've enough patient classmates to help me wade through my lack of ideas.

Today we saw critiques of three works, which all made me think that the weekend and my internal dialog have skewed my understanding of "chance-based" into "it must have a pseudo-random data source". Maybe I should work more at the lab...

Talks with classmates were helpful, so I think I can get my wheels turned back to the right direction before Wednesday. I'm going to look for my photos from Germany (where I saw the art that's my current, exclusive inspiration) so hopefully I can make some progress.


edit:

Good News: I found my images from the Deutsche Post Tower.
Bad News: I have only one not-so-good, skewed image of the art pieces I wanted for reference.

Maybe Thomas (Storey) took more than I did. It's a bit ironic, considering the lasting impression these things have had on me.