Saturday, December 7, 2013
I'm glad to say that plans seem to be in motion for our generative class to have an exhibition early next semester. In preparation, I will continue to post updates as I finish my piece and bring it up to a gallery level of polish.
Currently, I'm looking at some colorspace information that Phil provided me.
My HCL colorspace version seems to be broken right now, so I'll need to look into why it's not working.
Friday, December 6, 2013
Final Project Presentation
concept
Conceptually, the work "Playing with Chaos" explores visual representations of chaos for their aesthetic properties and structure. The work intends to inform its audience about the nature of chaos by portraying systems that expose its "unpredictable, but not quite random" behavior.
This work will (more than likely) ultimately be presented as either a real-time piece or a static loop projected onto a gallery wall. Though touch/mobile aspects were previously explored (and the code base for them is still retained), no plans are in place for exhibiting the piece with these capabilities active.
aesthetic
A myriad of color palettes have been explored for the work, though currently it appears that color modulation with a 1:1 mapping between the period of the chaotic input and the "period" of each box's hue within the HSV colorspace has produced the best compromise: visual interest without sacrificing or disingenuously representing the underlying chaotic system.
The LAB/HCL colorspace is also currently being explored to see the effects of holding the luminance, rather than "brightness" of each square more constant, though presentations of those results have been received less positively.
technical
The work utilizes a logistic map to generate a function which is chaotic across its entire period, an important matter when attempting to map directly from the chaotic value space to the visual color space. The sketch is developed in Processing with snippets of R used to generate some of the color palettes, and can run in real time.
The system can now also be pre-rolled to a certain iteration, which opens the possibility for more interesting tests and experiments.
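To make the mapping concrete, here is a rough Processing-style sketch of the idea (illustrative only, not the project's actual code; the r = 4 parameter and the fixed saturation/brightness values are assumptions):

float r = 4.0;                      // fully chaotic parameter for the logistic map
float value = 0.5030686;            // seed value in [0, 1)

float logisticStep(float x) {
  return r * x * (1.0 - x);         // output stays within [0, 1]
}

void preRoll(int iterations) {      // jump the system ahead to a later iteration
  for (int i = 0; i < iterations; i++) value = logisticStep(value);
}

color valueToColor(float x) {
  colorMode(HSB, 1.0);              // hue, saturation, brightness all in [0, 1]
  return color(x, 0.6, 0.9);        // 1:1 map: the chaotic value becomes the hue
}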
future work (for the show!)
Now that the framework has been sufficiently established and can run and be captured as a video stream, I intend to:
- test 2-dimensional chaotic systems
- continue to research alternative color palettes
- allow the system to loop back to its initial state
- explore realtime switching between any successful alternative visualizations
Monday, December 2, 2013
Final Project Update 09
Success!
I was finally able to implement the unified timer design. As a result, I can now make significantly smaller blocks. It also seems that previously, my code had actually been lagging a bit, as the new results are much snappier even when set to the same tick interval. Here are some screenshots for now; it may be a while before I can get a screencast due to the memory increase this trade-off requires (fewer timers, many more blocks).
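Roughly, the unified timer amounts to one clock driving every block (the names below are made up for illustration, not the actual implementation):

ArrayList<Block> blocks = new ArrayList<Block>();
int lastTick = 0;
int tickInterval = 100;                        // ms between chaos updates

class Block {
  float value = random(1);
  void step()    { value = 4.0 * value * (1.0 - value); }  // one chaotic update
  void display() { /* draw this block's square from value */ }
}

void setup() {
  size(400, 400);
  for (int i = 0; i < 400; i++) blocks.add(new Block());
}

void draw() {
  if (millis() - lastTick >= tickInterval) {   // one shared clock...
    lastTick = millis();
    for (Block b : blocks) b.step();           // ...advances every block at once
  }
  for (Block b : blocks) b.display();
}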
Future Work
- Looping mode
- Alternative visualizations
- 2D chaos
- Seamless / arbitrary switching between multiple chaos visualizations
- Better video capture
Saturday, November 30, 2013
Final Project Update 08
Implementation is going alright, though I haven't had much time to work.
I've played with Android a bit more, but I just need to give it up. It seems that touch is not going to be a part of the project I present, though I'll leave the functionality in there for my friends or in case I find a different way to conceptualize it. Or for when I get bored.
The boxes resize to the grid properly, but I'm having trouble unifying the timers. I think I just need to step away for a bit and let my current work sit.
edit:
Whoops! Important stuff I almost forgot.
I found a MUCH easier way to deal with color spaces than what I've been doing.
LAB/HCL color in R
Using the colorspace library in R, I am able to interactively generate color palettes and output them as lists of normalized sRGB values (perfect to convert with color()). I can define substeps for smoothing, coerce the colors to RGB, and have full control over ranges of luminance rather than just a single value. It's probably not a good idea for me to use only one luminance anyway.
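On the Processing side, consuming such a list is straightforward; a hypothetical example (the triples below are placeholders, not the palette actually exported from colorspace):

float[][] palette = {
  { 0.91, 0.45, 0.32 },   // one normalized sRGB triple per palette step (placeholder values)
  { 0.78, 0.51, 0.60 },
  { 0.42, 0.58, 0.77 }
};

color paletteColor(int i) {
  colorMode(RGB, 1.0);    // treat each channel as 0.0 - 1.0
  return color(palette[i][0], palette[i][1], palette[i][2]);
}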
Full disclosure, I've never used R before, but it only took an hour for me to get up to speed with enough basics to write my small script and utilize the libraries. Sadly though...
Disappointed
As excited as I got, I'm actually quite disappointed in the color results I got from the HCL space. They still feel either too saturated or too desaturated and muddy when seen in the context of the grid. I think I'll be returning to my previous HSV solution. Oh well...
[image: r_hcl.jpg]
Sunday, November 24, 2013
Final Project Update 07
Tonight I talked with my twin, Josh, about my frustrations with my project, and I think having a fresh mind who is familiar with aspects of the project helped quite a bit. If not, the ranting sure did. Josh is a PhD student in the Department of Computer Science and has more of a mathy relationship with generative systems from his studies in HCI and AI.
I'll try to just bullet point a summary quickly:
Touch as a Medium
If I do insist on touch, can I utilize more from touch as a medium and use that? Maybe if people do draw out patterns that they then watch descend into chaos, could I find access to pressure sensitivity or use press duration to make the on-screen response more interesting or intuitive?
Perhaps when a user presses, color radiates outwards and slowly blends with any colors based on its distance away, instead of overwriting nearby colors.
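A quick sketch of how that radiating blend could work (entirely hypothetical, nothing like this is implemented yet): each cell is pulled toward the touch color by an amount that falls off with distance.

int cols = 40, rows = 30, cellSize = 16;
color[][] grid = new color[cols][rows];
float reach = 150;                              // how far a touch radiates, in pixels

void touchAt(float tx, float ty, color touchColor) {
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      float d = dist(tx, ty, i * cellSize + cellSize / 2, j * cellSize + cellSize / 2);
      if (d < reach) {
        float amt = 1.0 - d / reach;            // 1.0 at the touch point, fading to 0 at the edge
        grid[i][j] = lerpColor(grid[i][j], touchColor, amt);
      }
    }
  }
}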
A small OSD might also be helpful to explain controls or offer more options in a "touch mode"
If I want to test touch, just use my phone for now and get a tablet if it's worthwhile.
Dual screens, work with friend
Humoring the "game" idea, what if two people were meant to interact with this system and match each other's screen. Does that actually help?
I'm not sure about this idea, but I think I'll roll it around in my head a bit more. Up to this point I'd assumed the experience to always be between one person and the screen. Maybe a more social piece would be better.
Contextualize foreign concepts for audience
If I'm worried that I'll just end up telling everyone what they need to know about the piece, an alternative to text might be to contextualize the piece as a narrative story or something else relatable that could be more engaging. This might cause large changes in the project, but I'll keep it in mind
Since weather and stock markets show signs of chaos, perhaps look there for context. (Note to self: look up Condensation Cube, Haacke)
Also remember to keep maps simple, visual, and as 1:1 as possible.
Branch project to keep minimalism
If I love the aesthetic so much of the project that I don't want to change a thing, I may need to branch off so as to preserve my original idea, but also create something worthwhile as a final project.
Focus on useful optimizations, scale
Thread calculations or unify the timers so that much smaller boxes can be shown. I don't want to get too small though, or then I might as well be directly manipulating the screen's pixel array (which I can do from 654, but it's more code to write). The scale of the piece may change the experience dramatically, so get to testing it at the proper scale as soon as possible.
Include multiple alternative visualizations
One alternative we found interesting was having a single row centered vertically on the screen, and growing each bar in the negative or positive direction (almost like an audio band spectrum) as the colors stray from their original value. If I have time, I may very well implement this.
Attract mode might still be useful, and the ability to play both forward and in reverse would open up some options for looping the video if the touch aspect is fully removed.
Improved fitting function for variable width screen
Try to write a fitting function so that the screen fits better (probably centered) whenever the requested box size, rows, or columns don't divide evenly.
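One guess at what that fitting function might look like (not the project's code): keep the requested box size and split any leftover pixels into centered margins.

int cols, rows, marginX, marginY;

void fitGrid(int boxSize) {
  cols = width / boxSize;                    // how many whole boxes fit across
  rows = height / boxSize;
  marginX = (width  - cols * boxSize) / 2;   // leftover pixels become equal margins
  marginY = (height - rows * boxSize) / 2;
}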
Wednesday, November 20, 2013
Final Project Update 06
After tonight's presentation, I have more critique to work on.
I feel like my project is a mess right now and it's frustrating.
Critique and Feedback
1:1 mapping
My results turned out much differently when I started mapping 1D chaos data (logistic map) into a 2D field (RGB slice of LAB colorspace). Not only were my results not the visual improvement I had hoped for, I'm inclined to say they were "disingenuous" based on feedback I received in class. The suggestion was raised that if I intend to map to a 2D space that I use a 2D chaotic system; admittedly the only reason I tried it was because it hadn't crossed my mind, and the [0, 1) period of the logistic map was so convenient I was content and not searching for anything better suited.
Images as visual source?
Someone suggested that if I'm using 2D data, I allow arbitrary images to be used as the mapping. For example, someone takes a picture of himself and then the picture is reorganized chaotically. It could have interesting results, but I think it says less and less about the chaotic system, so I'm not sure I like it all that much.
Still no clear concept when presenting
Summed up by: "I'm still not sure what your project wants to be"
"Oh good! I'm not the only one"
I think this is honestly more of my fault than the project's. I'm not allowing the project to expand, and I'm not exploring its boundaries at all. I've gotten tunnel vision trying to force my project into the touch-based mediums. Hopefully I'll be able to be a bit more flexible, but I'd like to try just one or two more aspects with touch.
Monday, November 18, 2013
Final Project Update 05
A good point that Phil brought up came up again in my conversation today. I've really grown to liking the distinction between reactive and interactive installations, though I think it may be challenging to incorporate true reactive elements into my piece.
It's frustrating to know that my current project does not lend itself well to showing chaos as do similar systems, such as two coupled pendulums released at the same time. I hope I am able to make the mapping in this project as obvious and simple as in the pendulum example.
Every idea I have though leads back more towards a game, towards mirroring screens, or perhaps towards replaying output several times over. The more I think about it, this is becoming two projects, and neither is interactive. One is reactive simply to allow users to play with the system (since people seem to enjoy it so much), and nothing more. The other doesn't even need to be real time or live; it can be a video of the wall of chaos, like what I currently have. I imagine my inclination will aim more and more in the direction of the latter, just because it's been so hard for me to part with the aesthetic and simplicity of the project in its current state.
This leaves mostly aesthetics to be improved, more on that later.
Sunday, November 17, 2013
Final Project Update 04
I've attempted to work on both the HSL improvement and experiments with the chaos function.
For critique tomorrow, I feel that the logistic map is just as simplistic as the non-chaotic sine map, so it may come down to a question of what is more visually appealing.
Color Luminance and HSL
http://www.thefullwiki.org/HSL_and_HSV (or just original Wikipedia version)
http://axonflux.com/handy-rgb-to-hsl-and-rgb-to-hsv-color-model-c
http://vis4.net/blog/posts/avoid-equidistant-hsv-colors/
update:
Oh silly, silly me. It does me no good to try to translate HSL to HSV because that conversion doesn't include the hue in the equation!
Oh! But guess what? HSL isn't what I wanted at all in the first place!
I've been talking about luminance, and that would require Lab colors and XYZ translations, which I'm capable of doing, but I don't know that they're worth it. If I really want that, I should take a slice from a Lab color histogram, save it as an image, and then sample the image instead. We'll see if that's worth it.
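A sketch of that image-sampling idea (the filename and the slice layout are hypothetical): render the slice once, then pick colors from it with the chaotic value.

PImage labSlice;

void setup() {
  size(400, 400);
  labSlice = loadImage("lab_slice.png");      // a pre-rendered slice of the Lab space
}

color chaosToColor(float value) {             // value in [0, 1) from the chaos function
  int x = int(value * (labSlice.width - 1));
  int y = labSlice.height / 2;                // fixed row; assumes luminance varies along y
  return labSlice.get(x, y);                  // sample the image instead of converting spaces
}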
Wednesday, November 13, 2013
Final Project Update 03
Time got away from me this week, and though I was supposed to present for critique today, I wasn't able to. Feeling better though, so I should be prepared to show some sine progress and maybe the HSL fix for Monday.
Not sure about the interaction though still, it just keeps rolling around in my mind and not getting anywhere.
Monday, November 11, 2013
CAE Class Notes
Birkhoff's Aesthetic Measure
Related degree of order to degree of complexity
complexity: the degree of effort the brain has to make
order: the degree of effort "released"?
M = O / C
Valiant attempt, but of course, there are problems.
Could not compare things like circles
Could only compare within a certain type ("straight-edges")
Douglas Wilson (1939) showed scientifically that the measure isn't really objectively useful
"Orderliness vs Beauty"
Takeaway
Complexity and its relation to Order are key
Neurological basis for aesthetics ("pay attention to neurology")
Number Sequences
Pythagoras
Fibonacci Sequences
Golden Ratio
Gustav Fechner (1860s)
Performed tests that said Golden Ratio rectangles were more "appealing"
Later tests failed to confirm conclusively
Livio (2003)
Credibly debunked use of Golden Ratio throughout history
Le Corbusier used the Golden Ratio because he thought the "great mathematicians of the past" did
Zipf's Law
P_i ~= 1 / i^{a}
More likely, all over the place in nature
Voss and Clarke (1975)
Gestalt
Law of Praegnanz
Perceptual grouping
Grouping impacts balance
Datta et al. (2006, 2007)
56 features and ratings
Looking for patterns
15 key features
Found a reasonable correlation between their results and predictions
Friday, November 8, 2013
Final Project Update 02 (sick)
It's been a bit of a strained week; I've been sick since Tuesday-ish with a progressing sinus infection.
I've been able to make it to class, but I've yet to stay awake for a night's work.
I'm worried I'll be behind for the next while catching up, but I'm trying to keep up my research in the least.
Tuesday, November 5, 2013
Final Project Update 01
Today I felt inspired to look up alternative methods for chaos. I found some interesting explanations in "The Computational Beauty of Nature", but also some accessible material in the following links:
I will experiment with these in the coming days; they seem promising.
(Actual) Chaos Function Research
Chaotic and Fractal Dynamics: An Introduction for Applied Scientists and Engineers by Francis Moon
update:
The logistic map looks incredibly promising, as it also has a period from 0.0 to 1.0. The only reason I used sine in the first place was because I wanted a function with that period. I had heard sine was used in some random functions, so I thought maybe I could make a random chaotic function from it.
The first link I posted, the one with the sine chaos, also happened to link to the logistic map article that I found independently.
Excerpts from above links
Two chaotic regions
sin ( (1/x) (1/(1-x)) )
sin ( (1/(x/100)) (1/(1-x)) )
One chaotic region, but not so simple period
(2*sin(3/x))+(3*cos(5/x))+(4*sin(6/x))+(1*cos(3/x))
Two chaotic regions, no simple period
sin(3/x)*sin(5/(1-x))
Maximise this function: Find x that gives maximum point on y axis.
Two chaotic regions, no simple period
sin(1/x) + ( 2 * sin(1/(1-x)) )
Maximise this function: Find x that gives maximum point on y axis.
Thursday, October 31, 2013
Final Project Meeting Report
Met with Phil today, and I believe the talk went well
I was deciding between the L-system project from the life-based unit and the "Playing with Chaos" project I just finished for the complexity-based one.
With some reservations and concerns, I have tentatively decided to go forward with the complexity project.
Per my Complexity Project Final Presentation post, I'll be touching on as many areas as I can under the "Future Work" heading.
For the final, projects should have strong Aesthetic, Technical, and Conceptual aspects. While all areas could certainly use improvements (and many are addressed by my Future Work roadmap), I'm most concerned about the conceptual.
My feedback from the project suggests that the interaction is still a bit tacked-on, so my biggest challenge will be researching alternatives for integrating user interaction, or really to consider more fully the role and necessity of the interaction for the piece and its purpose.
Full disclosure: I find this daunting, but I'm willing to try.
More than anything else, I want this project to allow its audience exploration and leave them preferably with a sense of wonderment and understanding of the nature of chaos. I still feel this can reasonably be achieved through user interaction, perhaps touch, but I'm just at a loss for how....
Nonetheless, I'm reassured that Phil considered the previous project a good first iteration to bring in for the final. More in the coming days.
Saturday, October 26, 2013
Complexity-based Systems Presentation
"Playing with Chaos"
Purpose & Concept
Playing with Chaos is an interactive touch-based application built in Processing that encourages interaction between observers and mathematical chaos through play. It aims to visually inform about the nature of chaotic systems and allow users to spawn their own instances of chaos within the system.
This project is intended to be installed on large touch-screen monitors at least 5 to 6 feet wide, but preferably in a much larger array. The idea is that if only one user is present, she should be able to stretch and not reach the edges of the screen. Sweeping motions and play are encouraged.
Future Work
- Test on touch screen devices to work out any potential bugs
- Add ambient sounds to create a more immersive environment
- Explore alternative random algorithms that might better map to the color space with a more normal distribution
- Utilize the HSL color space so that the perceived brightness does not fluctuate without explicit instruction
- Improve grid code to fit better on a multitude of displays
- Unify timers (since all code is synced to the same tick) to save on processing and allow for larger grid arrays.
- Add modes for responsive interaction ("attract" mode, smooth, live "restart" transition)
- Explore additional gestures for user interaction
Wednesday, October 23, 2013
Complexity-based Systems Project Update 05
I am actually quite pleased with the progress I was able to make. The sketch is now touch enabled (though it's not like you could tell from looking at it). Using the SMT library was pretty painless, though they could have been MUCH more clear about NOT actually supporting Android just yet. There goes my easy way of testing it out.
No matter, the Mouse fallback will do just fine for now.
I rendered out the current piece without any interaction, though the framerate is a bit off because of how saveFrame works. If I have time I'll make a true render using a method James and I talked about last week.
edit:
Here's an actual render of 45 seconds of the test seed. I found out by trial and error (and ok, some math) that my poor laptop can't hold much more in memory than that. In a perfect world, this would be as easy as just adding saveFrame to my draw method; in practice, that slows the sketch to about half speed. While the playback looks just fine because my clocks are reasonably framerate independent, the reality is that Processing undergoes a variable frame rate that is roughly half as slow but inconsistent, and the resulting videos are not terribly informative.
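For reference, the capture approach in its simplest form looks something like this (the path and toggle key are illustrative); the saveFrame() call inside draw() is what drags the frame rate down:

boolean recording = false;

void draw() {
  // ...draw the grid as usual...
  if (recording) {
    saveFrame("frames/chaos-######.png");   // writes one image per frame to disk
  }
}

void keyPressed() {
  if (key == 'r') recording = !recording;   // toggle capture on and off
}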
Monday, October 21, 2013
Complexity-based Systems Project Update 04
Grid system is working now!
Just need to add a few more touches in the morning and set it to full screen.
Note: for some reason it was really difficult to get shots without cool colors, but there are plenty of frames that show the warmer and even some cooler parts of the spectrum. Perhaps I should later add a debug feature to allow you to scrub through to a certain iteration...
The Sine of Chaos
The current randomization function is simple, perhaps even too simple. Here she is, in all her glory.
value = (0.5 * (1.0 + Math.sin(value * TWO_PI)))
The main thing I see about this that is potentially problematic is that sine will oscillate and visually seems to favor the edges of the spectrum over the middle. The piece as a whole also seems to be either mostly warm or mostly cool at any given time. Given user interaction, this may be completely negated, but it's not the "true noise" I was originally trying for at any rate.
Also, the brightness of the piece seems to oscillate almost arbitrarily; I was hoping that using HSB would allow me to just change the hue independently?
edit:
^ Minor brain lapse. If I wanted to have the same perceived brightness I should use a different color space that deals with luminance, such as HSL. I may backlog that for future work.
Sunday, October 20, 2013
Complexity-based Systems Project Update 03
Components
- Timers that can use the delta time of the application, frame-rate independent
- DuelingCalculator class/structure
- Seed
- Value
- Deterministic algorithm that can take the previous random value as input
- Mapping for value into color
- Grid structure based on desired box pixel width, dynamically scalable to display dimensions
- Touch interface (with Mouse compatibility for testing) that is grid-aware
After looking at both Processing and OpenFrameworks, I've decided to go with Processing for this piece.
Though I have more of this code already written in C++, my brief survey of both languages seems to suggest that:
- The workflow in Processing will be faster (for me) as the project will neither be memory intensive or complicated in design
- Processing has a well-vetted, long supported, straightforward library for Touch/Mouse interfacing called SMT
- Porting my code from C++ to Java will be relatively quick
While I prefer openFrameworks, there's a lot of extra plumbing there I haven't figured out, and to date, the ratio of projects I've done in Processing to oF is still 3:1. I don't have much time to finish, so that's as good a reason as any for a presentable prototype.
Progress
Ported my ofxDeltaTimer utility from C++ to Java
Set up "Dueling Calculator" system to generate sample values
Version control!
Sample value subset (showing 34 of 150):
Initial: 0.5030686 Initial: 0.5030687 Initial: 0.5030688 Initial: 0.5030689 Initial: 0.503069 Initial: 0.5030691 Initial: 0.5030692 Initial: 0.5030693 Initial: 0.5030694 Initial: 0.5030695 Initial: 0.5030696 Initial: 0.5030697 Initial: 0.5030698 Initial: 0.5030699 Initial: 0.50307 Initial: 0.5030701 ... Initial: 0.5030817 Initial: 0.5030818 Initial: 0.5030819 Initial: 0.503082 Initial: 0.5030821 Initial: 0.5030822 Initial: 0.50308233 Initial: 0.5030824 Initial: 0.5030825 Initial: 0.50308263 Initial: 0.5030827 Initial: 0.5030828 Initial: 0.50308293 Initial: 0.503083 Initial: 0.5030831 Initial: 0.5030832 Initial: 0.5030833 Initial: 0.5030834 Initial: 0.5030835
Wednesday, October 16, 2013
Complexity-based Systems Project Update 02
Honestly I haven't been able to work much until tonight's class block, but I think I've gotten a bit more clarity over the past week.
Purpose
Gathering my thoughts from the original presentation, I thought about what attracted me to the idea and explored my original motives. From the beginning I've been very interested in this idea of empowerment and exploration, particularly teaching people to learn and explore complex phenomena that they believe, or have been led to believe, they aren't smart enough to understand. There's this stunning side of the natural world that's so difficult for many (myself included) to grasp until it's visual, and I think that's what this project is really about.
It's about playing with chaos (oh the puns).
Play
One of the most epiphanic experiences of my undergraduate studies was the discourse Katie Salen and Eric Zimmerman present in Rules of Play about the subtle differences between "games" and "play".
I think the reason I was so reluctant to let this project become a game (besides the fact that I don't like the idea of being the carny) is that it's not a game to me; it's so much less structured.
The user interaction felt tacked on because it was; the way I presented it, it wasn't the objective of the piece, much less a supporting element. But I think it should be the objective, it should be the primary interaction of my audience, first to look, then to play.
No gradual subdivision
In this context, I don't think the progressive subdivision makes sense. I don't think having only two blocks makes sense either. I think it needs a reasonably subdivided array of same-colored squares to start. Pop open MS Paint, there you go:
Over time, I imagine it would shift colors ever so slightly, maybe like this:
Gross, I know, I'll fix the colors later. Obviously each block would also be a solid color. Maybe I should have done this in Illustrator... this is faster.
From there, I'm not really sure what will happen. I think that's largely dependent on the chaotic algorithm I use, but the end result should be fairly noisy regardless, so I think that's kind of just what I find aesthetic.
Thursday, October 10, 2013
Complexity-based Systems Project Update 01
I've decided to go with the dueling calculators idea (honestly though, I'd pretty much decided it by the time I was presenting it). After the presentation feedback, I've got a few decisions to make.
Clear Purpose
Feedback made me realize I still need a bit stronger purpose and concept. I want user interaction, but why? My original idea suggested starting from one square and subdividing over time, but is this actually necessary? Is the only way to make the project more meaningful to make it an explicit game (something I was trying to avoid)?
Feedback:
- Does the user interaction [allowing users to click on on rows to start them over] make sense in the context of the project as a whole? It feels tacked on.
- Maybe it's as simple as a game, though that makes you a bit like a carny, rigging the end result by the nature of the "game"
- What actually is the point of subdividing squares other than to show they've reached their tolerance? It's still not clear how, when, or why you would do that.
I've got a few things to think about, more next time
Wednesday, October 9, 2013
Complexity-based Systems Project Brainstorming
Thought dump
Interests:
- An art installation that exhibits hysteresis; each user visits it in the discrete state the last user left it but no one user is aware of how it got there.
- The visual quality of strange attractors
- A project that does more than literally visualize some aspect of complexity
Ideas:
- A "tunnel fly-through" that is following the path of a strange attractor or doubled pendulum. (it'd be slow, don't want to make people nauseous)
The environment might also be driven by chaos, or by parametrizing whatever state space the attractors are in.
https://www.shadertoy.com/view/XdXGD4
- A "duelling calculators"-esque visualization. Example: two adjacent squares are separated by a solid line. The squares are arbitrarily assigned a seed value. This value is the same for both but could even vary by the slightest magnitude. These values may map to different attributes, say, the colour of the square.
As the program goes on, the value evolves chaotically. When the values diverge more than a set tolerance, subdivide the squares and pick one of the colours as the new seed. Repeat. Chaos ensues. (A toy divergence sketch follows this list.)
Eventually this would probably end up looking like noise, but phases and patterns might emerge
It might be cool to allow users to draw across to set rows and columns, or to touch individual squares sequentially to reset the colors.
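To make the sensitivity concrete, here is a toy version of the two calculators (illustrative only; the logistic map stands in for whatever update the piece ends up using):

float a = 0.5030686;              // two seeds that differ only in the 7th decimal place
float b = 0.5030687;
float tolerance = 0.1;
int steps = 0;

void setup() {
  while (abs(a - b) < tolerance) {
    a = 4.0 * a * (1.0 - a);      // the same deterministic update applied to both values
    b = 4.0 * b * (1.0 - b);
    steps++;
  }
  println("Diverged past the tolerance after " + steps + " steps");
}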
Life-based Systems Project Update 05
Since I'd like to carry on this project for my final, I began looking for more references and previous work for exploration. Here's a short list from the massive amounts I've culled off Google Scholar searches.
More references
http://portal.ku.edu.tr/~megunal/articles/cellular.pdf
Wednesday, October 2, 2013
Life-based Systems Presentation
Note:
Unfortunately I've had to roll back my past two days' worth of work on this project, so the presentation images are still from Monday. There was a lot of work that needed, and still needs, to be done to isolate all of the data structures I have, and I just kept running into errors. I will continue to iron those out in the coming days.
Concept
Create a sound "garden" where plant-like structures are interpreted as musical notes and chords.
The installation takes up considerable gallery wall-space and is meant to feel immersive.
The sound played back by each of the "L-Sound-Systems" is panned relative to their position on screen
The structures are drawn in real time, and visualize leaf-like structures at the nodes of each note
Progress
A single structure can be drawn and shown at once. Only the pitch is currently driven directly by the L-system. The empty space should be filled by other structures.
Future Work
- Bug fixes
- Draw (i.e. reveal) plant structure in real time as it is "played"
- Allow for multiple plants to grow
- Match note groups from parser with node groupings in visualizer (incorrect green circles are currently drawn due to mismatch)
- Time MIDI events properly
- Draw radial sine "leaves" (or other visualization) instead of debugging circles
- Expand the L-system grammar and parameters to be more flexible and carry more information
- Add genetic wrapper to evolve variable input structures according to pleasing chord structures, note progressions, rhythms, etc.
- Explore audience interaction via keyboard, where users can input short sequences or entire pieces and watch them grow into a forest.
alternatively: - Consider allowing input of midi files to be interpreted
- Consider audience interaction via touchscreen to allow users to draw out their own plants
Monday, September 30, 2013
Life-based Systems Project Update 04
AH! IT WORKS!
I finally have a working version that actually plays back music!
The green circles are temporary, and are meant just to point out which nodes on the tree correspond to which notes being played. In the future, I would like them to appear more as "leaves", and I'm thinking about playing with radiating sine waves that pulse out when the nodes play.
NOTE: In the non-debug mode, the text overlay is not present and the display is full screen.
This gets me closer, but I still have a long way to go.
Issues
- The code is such that (because I was in a rush for critique) only one of these systems can exist at once. I very much want a whole garden of these, but I'm taking it a step at a time.
- The system is static; I want for the tree to draw itself as the notes play, but right now, the entire tree is displayed all at once.
- The input L-system is fixed, and requires changes in code to make new systems.
- Visibly, circles will highlight for the wrong notes, or not highlight at all when they should. I think it's a disparity between when I initially build the tree and when I parse it incrementally later (two different functions)
- My data structures do not represent all of the MIDI parameters I would like to use. Several parameters are fixed or arbitrarily randomized.
Sunday, September 29, 2013
[Aside] Sierpinski Strikes Again!
While working on my Digital Image homework last night for VIZA654, I inadvertently created a Sierpinski triangle in the green channel of my image.
I'm still pretty ecstatic, and I'm looking forward to figuring out exactly why the image was generated, if it works for all image resolutions and aspect ratios, and how I can better control it.
That's all for now!
Thursday, September 26, 2013
Life-based Systems Project Update 03
Progress feels good. I'm finally making sense of my components.
Also, the ofxMidi exampleOut project is very informative
- The reason I was having problems integrating MIDI into my project (and considering using wav) is that the openFrameworks ProjectGenerator script does not successfully add the ofxMidi addon package to a new project (most likely the fault of the maintainer, not the ProjectGenerator).
- The MIDI backend for windows is RtMidi when using the ofxMidi addon. This makes setting up and using JACK or another MIDI backend unnecessary, and makes my sketch much more portable.
- Microsoft (at least my build) does have a default MIDI port that routes to the Microsoft GS Wavetable Synth. Fortunately, ofxMidi is able to map to this port by default.
Not to mention, I don't need another program to also output the audio.
- The example projects for ofxMidi now compile properly on my machine, significantly reducing the amount of code I need to figure out myself
More soon...
Wednesday, September 25, 2013
Life-based Systems Project Critique Day | Out Sick
Regrettably I was unable to attend class today. I came down sick in the afternoon.
Tuesday, September 24, 2013
Life-based Systems Project Update 02
Thoughts
- Should I use branch depth or specific rules to build chords?
I --> I IV V I
ii --> ii V I IV
IV --> IV V I ii
V --> V I ii V
Which looks like this in code, btw
lsystem.addRule("1", "1451")
lsystem.addRule("2", "2514")
lsystem.addRule("4", "4512")
lsystem.addRule("5", "5125")
This would make the chords more portable to different keys, but might take too much control away from the system.
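As a toy illustration of how those rules expand (this is not the LSys parser, just the string rewriting sketched as Processing-style Java for brevity):

String expand(String s) {
  String out = "";
  for (int i = 0; i < s.length(); i++) {
    char c = s.charAt(i);
    if      (c == '1') out += "1451";   // I  --> I IV V I
    else if (c == '2') out += "2514";   // ii --> ii V I IV
    else if (c == '4') out += "4512";   // IV --> IV V I ii
    else if (c == '5') out += "5125";   // V  --> V I ii V
    else               out += c;        // anything else passes through unchanged
  }
  return out;
}

void setup() {
  String s = "1";                       // axiom: start on the I chord
  s = expand(s);  println(s);           // prints 1451
  s = expand(s);  println(s);           // prints 1451451251251451
}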
Monday, September 23, 2013
Life-based Systems Project Update 01
I have decided to use openFrameworks and the openFrameworksExtensions (ofx-) for my project. For those not familiar with openFrameworks, think of it as the C++ brother of Processing. It doesn't hide nearly as much from you, which presents a steeper learning curve for non-programmers but allows more control and efficiency as a tradeoff.
openFrameworks for Processing Users
Libraries
L-systems
I've been working on gathering together the libraries I will need to accomplish my goals.
First I found a basic L-system library that I will be modeling my L-systems after. For the time being, it accomplishes the basics of what I need and has a simple draw class for debugging, which makes it convenient.
LSys/ofxLSys: https://github.com/daanvanhasselt/snippets/tree/master/LSystem
Audio Output
The idea is to eventually route MIDI output to a MIDI application where I can potentially run filters on the code. I will probably use Ableton for this work, but for the meantime I do not want that to slow down my development of the actual program.
Ableton Extension: https://github.com/tassock/ofxAbleton
Generic MIDI Extension: https://github.com/danomatika/ofxMidi
In the meantime, I have alternatively downloaded some audio files of a sampled piano, modified them, and exported them as WAV files from here:
University of Iowa Music Instrument Samples: http://theremin.music.uiowa.edu/MIS.html
Progress
I've got openFrameworks up and running (as seen in image above)
I figured out how to install addons
I've rounded up the initial addons I need for my base proposition, and started looking at addons for my stretch goals
I'm working on modifying the parser from LSys to play music back while it reads the l-system, but that is going to need to be modified further so that the sounds play back appropriately
I'm currently just working with a subset of the C-major scale (C, F, G, A, D) because it's easy to make something more coherent, and it limits my inputs and axioms.
Notes
- I'm worried about the MIDI side of things. If the wav side is fast enough, I may just run with it for this project, though it wouldn't be ideal
- I need to make a decision about what I want the visuals to be so I can begin developing them as well. I feel very behind right now because of the weekend (family came in to visit, birthday)
- I'm worried about the length of the strings getting too long, so I need to do a better job of encoding my
Life-based Systems Project Idea
Idea
Generate and play "music" by interpreting L-system nodes as notes and same-level branches as chords
Proposal
- The backend will be an L-system interpreter
- Input will be fixed based on my experimentation
- The resulting L-System will be interpreted and played-back as music
Goals
- Find or write a C++-based L-system interpreter
- Experiment with L-systems that produce tree-like structures normally, but replace the F (forward) symbols with letters.
- lowercase = transitions
- uppercase = notes?
- Modify the drawing l-system parser to play the l-system instead
Stretch Goals
- Realtime playback of visuals in OpenFrameworks
- MIDI Output / Streaming to 3rd party application
- Randomization of input
- Genetics and fitness to breed "better" L-systems for sound
Reference
http://www.tursiops.cc/fm/
Monday, September 9, 2013
Chance-based Systems Presentation
Presentation Critique
edit:
Based on feedback, I would like to add further explanation that I thought I had posted explicitly prior, but indeed, had forgotten. Screenshots and planned future work still follow.
Concept and Purpose
Microscopic Monumentality:
One of my inspirations for this project was the feeling I experienced when I encountered large hung canvases of chip art, only to discover that they were images taken of actual microchips. It was an interesting change of perspective that was provocative, at least personally. Part of this project was my hope to recreate this experience for others, but without access to a larger display that wouldn't also block my motion tracking, I wasn't able to achieve this.
Systematic Aesthetic
It's my fondness for mechanical, rigid, rectangular, and angular aesthetics that drew me to using chip art as the vehicle for this expression in the first place. My biggest qualm with myself during the project was that I could not push the aesthetics further; I simply didn't start early enough to have the time, but it's on my roadmap.
Artistic Participation in the Technical
Lastly, an idea I've been exploring in a more literal fashion is how to allow those who are daunted by the technical to be able to participate and enjoy it in a way that bridges that fear gap. This is the hardest of my goals to implement for two reasons:
- The aesthetics have to be far more representational and evocative of something actually technical. If they don't look technical, why would anyone necessarily consider them technical?
- Because the project hinges on chance, there's a difficult balance in allowing for audience participation without direct influence. If the crowd drives the simulation knowingly, it loses much of its chance; if they do not feel any influence over the system at all, how do they actually experience it? How does one keep an audience interested enough to evoke wonder and yet perplexed enough that they never encroach actual understanding?
I look forward to the opportunity to resolve these issues in the future, perhaps for a gallery type installation.
end edit.
Screenshots:
Without Outlines:
Analogous Color and Hue rules:
The following were generated on successive runs of the system for approximately 1.5 minutes
Planned Future Work:
- Work around opacity problems inherent in Processing 2.0 (right now)
- Fix fading functions to quickly fade new items and outlines in and out
- Parameterize color profiles instead of selecting random ranges for more interesting palettes
- Add perlin noise textures to improve visual complexity and aesthetic appeal
- Experiment with adding lines on randomly selected objects to move toward visual target
- Make or find a back-lit projector so a monitor does not need to be used
- Use depth-based tracking instead of image-based tracking
- This might help the system react much more effectively in the presence of larger crowds, where it appeared to falter. I had designed the system to work well with casual viewers, but I didn't anticipate such a large group being so close and enthralled ;]
- Make fullscreen
- Create system to more quickly/easily test and simulate so I don't have to wait for it to fully run
Sunday, September 8, 2013
Chance-based Systems Project Update 05
Today, I found myself exceedingly frustrated with the flob library I'm using. Originally I chose it on the recommendation that it worked out of the box and would scale well to my project, but there are several weak points in its implementation, and the API is somewhat lacking in documentation. Reading the source helped, but not by much.
I was getting strange problems the longer my system ran, and I noticed it especially when there was little to no movement on the system. Turns out, when there's no movement, flob actually resets the id counter for all of its blobs, which I had been caching and using in a number of situations. Ultimately it was causing a lot of blobs to be selected but then ignored and never drawn.
Rather than try to figure out how to recompile the libraries without messing up the rest of the implementation, I wrapped the blobs in a wrapper class and stored my own id on each.
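The wrapper amounts to something like this (sketched generically; flob's actual blob class and fields aren't shown here):

int nextId = 0;                // my own counter, never reset by the library

class TrackedBlob {
  final int id;                // stable id that survives flob's internal resets
  Object flobBlob;             // the library's blob object, kept generic in this sketch

  TrackedBlob(Object flobBlob) {
    id = nextId++;
    this.flobBlob = flobBlob;
  }
}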
The system is finally stable, and I think most of the remaining tweaks will be for aesthetics.
Here's some shots from today in Studio A (my twin came along as well):



While I really wanted to use the projector I checked out, I remembered that my original reasoning for wanting to use a screen (besides being easier to test on and truer to color) is that if people are supposed to walk in front of this, then it needs to be back-projected or projected from the ceiling, and both of those sacrifice visual fidelity, time, and flexibility. It's still my intention to make the project work as a projected installation, but I really just need more time.
I'll finish cleaning up the aesthetics and post some screenshots on a final blog post for the presentation tomorrow. Until then!
Saturday, September 7, 2013
Chance-based Systems Project Update 04
Today is a smaller update, but the good news is COLOR!
I started looking through Processing's color utilities, which fortunately are much more user-friendly than the default Java ones. For now I'm just building up the system with the needed parameters so that tomorrow I can tweak the color ranges and palettes as finely as I choose.
A couple of problems are arising: a lot of the blocks are being drawn over, so the data structures will need to be refactored into sortable ones. I would like to sort based on the visual "size" of the chips, and I think area will be enough. I could factor in the diagonals as well, but I don't think it will be necessary, and I'd rather not do more work if the aesthetic of the simpler solution works just as well.
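One way the sortable refactor could go (the Chip class here is a placeholder, not the project's data structure): sort by area so the big chips draw first and the small ones stay visible on top.

import java.util.Collections;
import java.util.Comparator;

class Chip {
  float w, h;
  Chip(float w, float h) { this.w = w; this.h = h; }
  float area() { return w * h; }
}

ArrayList<Chip> chips = new ArrayList<Chip>();

void sortChipsForDrawing() {
  Collections.sort(chips, new Comparator<Chip>() {
    public int compare(Chip a, Chip b) {
      return Float.compare(b.area(), a.area());   // descending by area: biggest first
    }
  });
}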
Yesterday I checked out a projector and booked time in Studio A to get the feel of how the project might actually be set up as an installation, and tomorrow I'll hopefully be able to find a few people to help me test it in Studio A.
Friday, September 6, 2013
Chance-based Systems Project Update 03
Since Wednesday's critique I've been able to move a bit further. Currently, I'm tweaking the values for the tracked blobs and building the data structures I'll need to track and display them properly. It's taking me a little time to remember my Java, but it's not too bad so far.
The major steps today were identifying all the debugging information I'd need and getting them reliably and correctly output onto a debugging screen. The red and light green boxes you see above are blobs that have been selected by my random/fitness check, and will be used for mapping onto the parameters of my chips.
Lastly, it took a bit to grok how to make a stopwatch/timer based on the frameRate, and adjusting to the little tricks Processing pulls behind the scenes was... fun. Now that I have it though, accomplishing animations and timeouts will be much easier.
Now I just need to start making designs!