A Latvian-born, now-living-in-America, Hollywood-diva wannabe named Linda Leen has recently been making publicity rounds in Latvia after recording a new album of songs here in Liepaja with the Liepaja Symphony Orchestra.

In a country of two and a half million people, it’s not that difficult to become a celebrity. She did an FHM photo shoot and appeared here on the Latvian version of “Dancing with the Stars” in 2008. One Latvian website even called her (ridiculously) the Latvian Mariah Carey.

She has already posted a track from the new album on her website, a song called “Pamosties,” which translates (ironically, as you shall see/hear) basically to “Awakening.” In an article in Diena, the main Latvian daily newspaper, which singles out “Pamosties” as one of the best new ballads of the year, Leen speaks about the inspiration for the song: the poetry of Inga Gaile, how the song is really the anchoring point for the entire album, how it’s really about a child-like openness to love and life and blah, blah, blah.

Now, I’ve seen the CD, held it in my hands, and even seen some of the printed sheet music used by the orchestra musicians. Linda Leen’s name is consistently listed as the author of the music. The problem is, she isn’t. It’s clearly a rendition of Stevie Wonder’s beautiful 1972 ballad, “You and I,” from the album Talking Book. It’s not a note-for-note theft, as the song takes some turns that Stevie Wonder’s song doesn’t, and vice-versa. But the changes and the melody of the verses are clearly not Leen’s.

First the Stevie Wonder excerpt, the first verse:
You and I excerpt

Now the same first verse of “Pamosties,” taken from the very track on Linda Leen’s website. Who knows how long it will remain up there, as I’m not the only one who has noticed the theft: the comment board on Diena’s website already contains several posts by other people calling her out:

Pamosties excerpt

Not to get all meta, but my first trip to Leen’s website made me laugh because her desperation for fame is at once so transparent and pathetic. I wonder what sort of psychological hoops one must jump through to convince oneself that you can steal from someone as well-known as Stevie Wonder and nobody will notice. At least she’s already being called out for it here, on the margins of comment boards anyway. She’s due to sing a concert with the LSO here at the end of the month; it’ll be interesting to see if she can continue getting away with it that long.


Noatikl

Continuing my look at generative music software, I’m now going to talk about a program developed by Pete and Tim Cole called Noatikl, from Intermorphic. Pronounced “noh-tickle,” Noatikl is described on the Intermorphic website as “a powerful, easy to use generative music tool that helps you come up with new musical ideas – and your own generative music. It uses techniques developed over the last 17 years through our work on the award-winning Koan generative music system.”

Noatikl costs $99 (there’s a 30-day demo available). It’s available as a plugin (AU/VSTi/DXi) for various DAWs or as a standalone application. I used the standalone version.

Setup is relatively painless but, continuing a theme with third-party applications, getting it to sync with Logic is not so simple, in part because the documentation is out of date. If you want to try it yourself with Logic, here’s what you need to do:


• Download the Logic 8 template here.
• Create TWO ports via IAC (if you don’t know what this is, see my earlier post here). For the sake of argument, let’s call them Port_Noatikl_1 and Port_Noatikl_2
• In Noatikl, set MIDI Output to Port_Noatikl_1 and the MIDI Input to Port_Noatikl_2, then check the Sync button. (The Listening button is used if you want to input MIDI into Noatikl in order to have a “Listening Voice” respond to that data with MIDI data of its own.)
• In Logic, under Settings>Synchronization>MIDI (tab), check the Transmit MIDI clock box, and from the pull-down menu, select Port_Noatikl_2
• In Noatikl, in the Object/Parameter View column on the left, scroll down to Voice – Envelope – User Envelope 1 (Volume) and either uncheck the enable button or change the envelope settings (assuming you’ve already created some “Voices,” which I’ll get to in a minute). Its default position sets the respective channel faders in Logic to +6.0 dB, and chances are you don’t want that.

Now that the preliminaries are out of the way, let’s make some music! Well, not so fast there, cowboy. Judging from Intermorphic’s forum, YouTube, Vimeo, and my own first attempts, I quickly (and maybe too quickly) came to the conclusion that the majority of the stuff being produced with this program falls under the category Ambient. Now, Ambient music has its place, I suppose, but it’s not what I’m interested in, and it seems at odds with how the program is described by Intermorphic. I can walk into any Sam Ash in New York and hear some doofus noodling with a synth pad while testing the newest keyboards, and hear something no different from what people are taking pains to get this program to do. Harsh? Maybe.

Anyway, let’s dig in a bit. You have the option to control multiple global parameters: piece length, tempo, meter (*ahem*, not really – more later), scale, harmony. With the scales and harmony, you can ask the program to favor some intervals over others, in terms of percentages. So, for example, you can say a piece should have 35% major 7ths, and the program will try to respect that parameter as best it can, given the other parameters you set. A sketch of the general idea follows.
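Just to make that concrete, here’s a minimal Processing sketch of percentage-weighted interval selection. This is my own illustration of the general technique, emphatically not Noatikl’s actual algorithm; the interval set and weights are invented:

// Conceptual sketch: pick intervals (in semitones) according to
// percentage weights, so some intervals are favored over others.
int[] intervals = {2, 4, 7, 11};    // M2, M3, P5, M7
float[] weights = {20, 25, 20, 35}; // e.g. favor major 7ths 35% of the time

int pickInterval() {
  float total = 0;
  for (int i = 0; i < weights.length; i++) total += weights[i];
  float r = random(total); // a point somewhere in the total weight
  for (int i = 0; i < weights.length; i++) {
    if (r < weights[i]) return intervals[i];
    r -= weights[i]; // move past this interval's slice
  }
  return intervals[intervals.length - 1];
}

void setup() {
  for (int n = 0; n < 10; n++) println(pickInterval());
}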

Individual parts also have assignable parameters, depending on the type of part you’ve created. Those parameters basically evoke strategies for how that part will produce its next pitch, how many pitches it will create simultaneously, what kinds of rhythms will be produced and how often, how long the phrases can be, the number of rests, etc.

Noatikl allows you to create several types of voices:
Rhythmic Voices are the default voice type. Notes have their durations composed according to the rhythm rules you define for your voice, to fit as well as possible within the bar structure of your piece.

Ambient Voices play notes irrespective of tempo or bar timings. Ambient Voices are used for creating drifting, floating sounds, drones or general texture.

Following Voices work in a call-response manner, following the behavior of other Voices according to rules you set.

Repeat Bar Voices are like Rhythmic Voices, but can be defined to repeat material that they have composed in previous bars.

Fixed Pattern Voices play in accordance with various fixed MIDI patterns that you import into Noatikl. These patterns are able to follow generative sequencing rules and can adapt automatically according to properties you define. I did not try this one.

Listening Voices respond to incoming MIDI note events in definable ways.

Okay, so first, let me say that after digging through the Intermorphic users forum I did find a small sampling of pieces created with Noatikl that were truly interesting, though how much editing was involved after capturing the MIDI is hard to say. And I suppose that’s okay. Given the nature of this kind of software, to expect some kind of musical magic to happen by pressing a button is unrealistic, and perhaps happily so. Making real music with Noatikl still demands the ear and effort of a real, human composer. James Anthony Walker has posted a couple of intriguing examples here and here. Jovan Pesec posted a two-movement piece using Noatikl, the second movement of which I found charming. You can download an MP3 and PDF of the score here.

In terms of basic use, the Noatikl User Guide includes one tutorial that walks you through the creation of a slow (45 BPM), short, ambient soundscape, with a drone that has two following voices at a detuned unison and a perfect 5th, accompanied by a generated melody. The remaining tutorials involve getting Noatikl to work with various DAWs or ways of using Noatikl to generatively send MIDI CC (continuous controller) messages to your DAW. For example, you can use an LFO in Noatikl to send tempo change messages to Logic (MIDI CC22) once you’ve appropriately set up the cabling in Logic’s Environment to receive that message and send it to the Sequencer Input. I found myself wishing for more tutorials on music making.

By just tweaking the various percentages and rules, Noatikl frankly produces roughly 10 seconds of something lovely and 50 seconds of nonsense for every minute it runs. And no matter what meter I set, it pretty much stayed in 4/4. That said, it is possible to edit what comes out into something decent. Here’s a 90-second MP3 that I turned around and tweaked for a couple of hours. I used a looped drum kit in Logic, combined two voices into one to make the piano part, edited out (or in) a great deal along the way, and composed a bass line to work beneath it. The result was something jazzy. But if there’s any sense of harmonic progression to it, it didn’t come from Noatikl: Noatikl Sample

Now, the thing is, if that’s all that Noatikl could do, it’d be easy enough to pack up, say no thank you, and go home. But everything I’ve described so far is like test driving a car. When you lift up the hood to check out the engine, Noatikl shows a lot more promise. There are two more aspects to Noatikl that potentially make it a powerful tool: Trigger Scripting using the Lua programming language and pattern editing.


I’ll start this section with some praise. A script submitted to the Intermorphic Forum by Chris Gibson, based on Maz Kessler’s and Robby Kilgore’s Harmonic Rotation Toy (a MAX/MSP patch featured in this YouTube video), worked very well, and was by far the most satisfying experience I’ve had with Noatikl so far.

The Lua script looks like this:

function nt_trigger_composed(noteon, channel, pitch, velocity)
  -- print ("Composed", noteon, channel, pitch, velocity)
  if (noteon == true)
  then
    local lCurrent = noatikl_Trigger_Parameter_Get("Follow Shift/Interval")
    -- print ("From lCurrent=", lCurrent)
    if (lCurrent == "3")
    then
      lCurrent = "5"
    elseif (lCurrent == "5")
    then
      lCurrent = "2"
    elseif (lCurrent == "2")
    then
      lCurrent = "7"
    else
      lCurrent = "3"
    end
    noatikl_Trigger_Parameter_Set("Follow Shift/Interval", lCurrent)
    -- local lInterval2 = noatikl_Trigger_Parameter_Get("Follow Shift/Interval")
    -- print ("To lInterval2=", lInterval2)
  end
end

The User’s Guide includes a short section on patterns, giving some sample patterns to insert in the Voice – Patterns view. The results were instantly more interesting than most of what the program generated on its own for me. Taken directly from the documentation:

Rhythm: <100 R 60 60 60.127 15>
Both: <100 B 60.15-30 1 60 2 60.127 3 15 7>
Forced: <100 F60 60.127 1 60 4 30 5 15.70-120 7>

In both cases, with patterns and scripting, I found myself wishing the documentation were more thorough in introducing this aspect of the program. There are examples of patterns and scripts to peruse, but they’re not broken down or organized in any systematic way that I can discern. Leaving the user guide and turning to various online reference manuals on Lua itself (which most famously has been employed in gaming, as in World of Warcraft), it’s not obvious how I might organize my time so that I target only the aspects of Lua useful for driving Noatikl (rather than something for gaming). To touch on the meter question again: if you want Noatikl to give you something in 7/8, assigning the meter of 7/8 globally doesn’t seem to do anything 7/8-ish. Musicians know that 7/8 means rhythmic and melodic patterns will be expressed together as 3+2+2 or 2+2+3 or 2+3+2. Noatikl really demands that such groupings be patterned or scripted in; a conceptual sketch of what I mean follows.
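For what it’s worth, here’s a tiny Processing sketch of the kind of grouping logic I mean (a conceptual illustration only; it has nothing to do with Noatikl’s actual pattern syntax):

// Build one bar of 7/8 by choosing a grouping of eighth notes
// (3+2+2, 2+2+3, or 2+3+2) and printing where the accents fall.
int[][] groupings = {{3, 2, 2}, {2, 2, 3}, {2, 3, 2}};

void setup() {
  int[] bar = groupings[int(random(groupings.length))];
  int eighth = 0;
  for (int g = 0; g < bar.length; g++) {
    println("accent on eighth " + eighth + " (group of " + bar[g] + ")");
    eighth += bar[g];
  }
}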

And while I’m talking about the documentation, let me just vent a little bit about a pet peeve. Exclamation points. I get turned off by any text that sprinkles exclamation points about like confetti. Usually parenting books or self-help books are the worst culprits. What are so many exclamation points doing in the documentation for a generative music software program?


So, to wrap up: I understand that an upgrade of Noatikl is in the works, though I have no idea what the changes will be. One obvious wish-list item would be building the most musically useful scripting options into the program itself, but I would happily settle for a really careful, deliberate, systematic walk-through of the pattern and scripting options in the documentation.



I just received a recording from the Milwaukee Choral Artists’ February 14 performance, directed by Sharon Hansen, of my El Paso de la Siguiriya, a flamenco-inflected setting for women’s voices of the poetry of Federico Garcia Lorca. Here’s an MP3: El Paso de la Siguiriya

I couldn’t have asked for a better soloist, in this case Rebecca Davies, who, to my ears, nailed the emotive power of singing in the flamenco style. Many thanks to Sharon Hansen!


[Screenshot: Elysium]

Continuing my look at generative music tools, here is Elysium, another freeware program that, like Tiction and Nodal, sends its MIDI data either to an external synth via whatever MIDI device you have, or via Apple’s IAC to a software synthesizer or your DAW (Logic Studio 9 in my case).

This is relatively new software (all three of these programs are still toddlers, really), and thus has both compelling possibilities capable of rich reward and (like my own toddler) bouts of misbehavior and instances where it doesn’t do what you think it will do. Elysium (screenshot above) was visually inspired (according to Matt Mower, the software’s principal author) by Mark Burton’s multi-touch instrument (made in 2007) called the ReacTogon, which in turn has much in common conceptually with the ReacTable (first presented in 2005). Videos of both are embedded below.

While Tiction and Nodal both offer the option of sending individual nodes or groups of them to a particular MIDI channel, Elysium does this via Layers. In the screenshot above, you can see three layers. Here is an MP3 of an excerpt I created sending Elysium through Logic: Elysium Test

Before I get to the specific pros and cons of Elysium, I want to talk in general terms about a couple of frustrations that have arisen, now that I’ve been playing around with this kind of software for the better part of a month. I offer these observations as grateful feedback to the software developers, and recognize that these are generous folks, really, and smarter than I.

My first gripe is maybe a little specific, actually, and maybe it has as much to do with Apple’s Logic as with anything else. From everything I can tell by scouring the user boards, Logic 7 used to play a lot better with programs of this nature, in terms of syncing. Noatikl, which I’ll talk about in a later post, has addressed its own problems with Logic using a complicated workaround, but no matter what I try, I can’t get these programs to sync together. If my musical needs, as it were, could be met exclusively within the standalone program and its sounding channel (i.e., if every musical element could be provided by wiring Elysium into Logic, recording the MIDI there, and calling it done), there would be no problem. But if I want to add an element from outside the generative program that requires precise timing, like an Apple Loop (or a Dr. Rex loop in Reason 4, which I also tried), then I’m in trouble. For the short MP3 excerpt above, I recorded the MIDI into Logic and then lined up the music’s beat 1 to an actual Logic bar’s beat 1. The music was set for 300 ticks per minute in Elysium, which would be 150 BPM, but because of latency and clock drift, it wound up at 148.5 BPM once recorded and realigned in Logic. I added some drum loops as a fourth layer, and the alignment was okay. But over a longer stretch of music, that clock drift becomes unworkable, an experience I had recently when trying to record approximately 7 minutes of material with Tiction and Logic. I haven’t tried it with Pro Tools (and can’t now, until Digidesign gets their Snow Leopard act together – I understand they’ve got a Beta version that’s 10.6.1 compatible, but I’ll wait), and my sense from the user forums is that Ableton Live has fewer MIDI sync problems, but I don’t have Live.

The other general thing about Tiction, Nodal, and Elysium is the sort of Perpetuum Mobile quality of them, which can get tiring. It can be worked around with various degrees of success in each program, but it also points to the utility of good syncing to enable post-recording editing. With Tiction you can simply stop or start any group of nodes at any time without affecting the playback of the other groups. With Nodal and Elysium the workarounds need to be more elaborate: you can set up timing schemes that affect the timing and probability of a nodal trigger. Setting up probability in Elysium is quite simple, actually; it’s simply one of the dials on offer in the edit menu of an appropriate nodal type, and amounts to something like the sketch below.
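To illustrate the concept (my own sketch in Processing, not Elysium’s code; the dial value is invented):

// Conceptual sketch: a node gated by a probability dial. On each
// tick, the node fires only p percent of the time.
float p = 40; // probability dial, 0-100

void setup() {
  frameRate(4); // treat each frame as one "tick"
}

void draw() {
  if (random(100) < p) {
    println("node fires on tick " + frameCount);
  }
}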

Elysium could benefit from adapting a little of Tiction’s simplicity in one case here, though. Once you’ve established a pitch network that you like, saved it, and reopened it (or even stopped it once), restarting the piece triggers all the layers at once, offering no possibility of recreating the fun of building the piece’s density over time. In fact, I edited 2 of the 3 layers that Elysium created after the fact. In the case of Layer 3, which was sparse to begin with, I changed it so much it was hardly like the original at all. And 3 layers was all I dared create: at 300 ticks per minute, the CPU load on Logic was in the red much of the time, and the timing between the layers regularly became unstable.

The pitch scheme of Elysium is set up following a pattern called a Harmonic Table, where every three adjacent pitches form a triad. There’s nothing particularly restricting about this, though it does mean that 3rds, 6ths, and perfect 4ths and 5ths are the only available adjacent intervals, and in this program proximity has rhythmic implications that cannot be gotten around the way they can be with Nodal. (A sketch of the general layout follows.) One possibility I haven’t much explored yet is the option to play a triad instead of a single pitch; you can choose which combination of proximate neighbors will sound the triad.
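For the curious, here’s how I understand the general harmonic-table layout, sketched in Processing (my assumption about the scheme, not Elysium’s internals): one hex axis steps by a perfect fifth and the other by a major third, so a cell plus its two neighbors along those axes always forms a triad.

// Conceptual sketch of a harmonic-table pitch layout. Axis u steps
// up a perfect fifth (+7 semitones), axis v up a major third (+4).
// A cell, its +v neighbor, and its +u neighbor form a major triad;
// swap in the (u+1, v-1) neighbor (+3) and you get the minor triad.
int basePitch = 48; // MIDI C3 at the origin (an arbitrary choice)

int cellPitch(int u, int v) {
  return basePitch + 7 * u + 4 * v;
}

void setup() {
  // Three adjacent cells: C3 (48), E3 (52), G3 (55), a C major triad
  println(cellPitch(0, 0) + " " + cellPitch(0, 1) + " " + cellPitch(1, 0));
}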


There are several interesting features unique to Elysium. One is the Ghost Tone, here meaning an adjustable number of (rhythmic) repeats of the triggered pitch, with 1 to 16 repeats available. A truly intriguing feature of this program is the possibility of applying LFOs to many parameters: ghost tones, tempo, transposition, velocity, and more. Alas, I could not get the LFOs to work, and the documentation does not include anything about them (the sketch below shows the kind of thing I mean). The probability feature works very well, though.
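Here’s the kind of thing I was hoping the LFOs would do, sketched in Processing (a conceptual illustration under my own assumptions, since I couldn’t get Elysium’s LFOs working to verify):

// Conceptual sketch: a low-frequency sine wave modulating MIDI
// velocity once per frame.
float rate = 0.25; // LFO speed in cycles per second

void draw() {
  float phase = (frameCount / 60.0) * rate * TWO_PI; // assumes 60 fps
  float lfo = (sin(phase) + 1) / 2;                  // normalize to 0..1
  int velocity = int(map(lfo, 0, 1, 40, 110));       // soft to loud and back
  println("velocity " + velocity);
}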

I’m guilty of misapplying a term. What is a node in Tiction and Nodal is a PLAYER in Elysium. And while the Perpetuum Mobile aspect is a strong character feature in Elysium, the variety of players makes it possible, to an ear that is willing to tune in to these kinds of changing landscapes and undulations, to inject a fair amount of surprise and change, regardless of how much one needs to swim upstream in order to make that happen.

Once the program is more stable, if I could put on a wish-list a small vanity item, it would be to make the program visually more compelling. How? By, for example, allowing the colors of the table to be customizable; allowing a certain amount of transparency/translucency to exist so that layers can be stacked and all the activity viewable from the top layer (like looking into a pool of dark water from above or something). I’m imagining a dark, translucent field, with 3D lights flashing in patterns in depth layers. But far more important than the cover of the book is its contents. I’ll be looking forward to seeing how this app develops over time.


[Screenshot: Activity Monitor]

Yesterday I upgraded the OS on my MBP to Snow (Slow) Leopard, Mac OS X 10.6.1, and had a problem that seems to have been pervasive among early upgraders: the system’s performance slowed to a pathetic crawl. I did the usual first steps: repaired permissions and verified the hard disk (using Disk Utility) and rebooted. Nothing. Reset the PRAM. No improvement. Fired up the Console and hunted down several files that were causing problems for various (and, as it turns out, unrelated) reasons. One of those is worth mentioning, however: Pro Tools. I’ve got Pro Tools LE 7 and am not using it much anymore, as I’ve opted for Logic Studio 9 as my DAW, but I’m still miffed to realize that Digidesign has no plans to provide a Snow Leopard update for LE 7, forcing all users to upgrade to Pro Tools 8. That’s a little sleazy.


Anyway, I fired up Activity Monitor (found in /Applications/Utilities) and saw that _coreaudiod was using 2.5 to 3 GB of the 4 GB of available RAM (NOT normal), which explains the extremely sluggish behavior. (Here’s what my Activity Monitor looks like now.) Rather than bore you with the details of the dozen things I tried along the way, I’ll get right to the two solutions that seem to have solved the problem for most people who had the same Core Audio issue (as I imagine many of you Mac users will also have).

Solution One. Either delete the file manually or move it aside via Terminal: sudo mv /System/Library/LaunchDaemons/com.apple.audio.coreaudiod.plist ~/

Solution Two. (This one worked for me.) Delete the folder System/Library/Preferences/Audio.

There were no side-effects, and the OS is indeed faster than before (or it seems that way, at least after several snail-hours).


[Screenshot: Nodal]

Continuing my look at generative music software, here’s Nodal, also free (but not for much longer: it’s currently on a time limit that expires on October 30, at which point an upgrade will be released for a reasonable $25). Like Tiction, it generates MIDI pitches derived from user-placed (and user-defined) nodes. Where Tiction uses the distance between connected nodes (indicated by a 1, 2, 3, and so on at the connector’s midpoint) to determine how many tics of the MIDI clock will pass between soundings of adjacent nodes, Nodal uses a graph to accomplish this; the sketch below illustrates the edge-delay idea common to both.
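The shared idea, as I understand it (a conceptual Processing sketch of my own, not code from either program; the pitches and delays are invented), is that an edge between two nodes carries a delay, and when one node sounds, its neighbors are scheduled to sound that many tics later:

// Conceptual sketch of edge-delay sequencing. delay[a][b] > 0 means
// an edge from node a to node b of that many tics; when a fires,
// b is scheduled to fire delay[a][b] tics later.
int[] pitch = {60, 64, 67};   // one MIDI pitch per node
int[][] delay = {{0, 2, 0},
                 {0, 0, 3},
                 {4, 0, 0}};  // 0 -> 1 -> 2 -> 0, looping forever
int[] fireAt = {0, -1, -1};   // tic at which each node next fires
int tic = 0;

void draw() {
  for (int a = 0; a < pitch.length; a++) {
    if (fireAt[a] == tic) {
      println("tic " + tic + ": node " + a + " plays pitch " + pitch[a]);
      for (int b = 0; b < pitch.length; b++) {
        if (delay[a][b] > 0) fireAt[b] = tic + delay[a][b]; // schedule neighbor
      }
    }
  }
  tic++;
}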

Tiction allows you to determine the “physical” behavior (jiggle, attract, repel, plus another feature called “weight”) and pitch behavior of each node. The node can draw its pitch from a set of 16 pre-defined, user-selectable pitches, where it makes its selection according to where it’s drifting on the X/Y axis, OR you can assign that node a single, fixed pitch from inside or outside the set of 16.

With Nodal, on the other hand, the nodes are immobile. That doesn’t mean you are forced to live with the grid: I took the screenshot above so you can see one of the examples that comes with the program (along with several other provocative examples), a beautiful miniature that is definitely free of the grid, called Nebula. Nested beside Nebula in the screenshot is the result of my following the tutorial. With Nodal, there’s more flexibility and possibility with pitch choices: each node can be assigned its own pitch list, or something relational to another node using +/- n. Just for the record, here’s a short MP3 example of the result of the tutorial, not bad for 6 little nodes:
Nodal Tutorial

Despite their similarities, and while Tiction has the attractive feature of being hypnotically beautiful and something you can fire up and make music with in minutes (my own synchronization and MIDI note-off problems with Tiction aside; I also couldn’t get Tiction to sync with Reason 4.0, whereas I had no such problems with Nodal, which comes with a dedicated port built in, making IAC unnecessary, I think), Nodal ultimately presents itself as something else: another way of composing. One of the downloadable PDFs at the Nodal website reads a little like a manifesto at times (but hey, then call me товарищ (comrade)):

“The goal of any Artificial Life (AL) or generative composition system should be to offer possibilities and results unattainable with other methods. A number of authors have suggested that the emergence of novel and appropriate macro behaviours and phenomena – arising through the interaction of micro components specified in the system – is the key to achieving this goal. While simple emergence has been demonstrated in a number of AL systems, in the case of musical composition, many systems restrict the ability to control and direct the structure of the composition, conceding instead to the establishment of emergence as the primary goal.

This focus on emergence exclusively, while interesting in terms of the emergent phenomena themselves, has been at the expense of more useful software systems for composition itself. The aim of the work described in this paper is to design and build a generative composition tool that exhibits complex emergent behaviour, but at the same time offers the composer the ability to structure and control processes in a compositional sense. The idea being that the composer works intuitively in a synergetic relationship with the software, achieved through a unique visual mapping between process construction and compositional representation.”


One of the reasons you’ve been seeing posts from me lately about the graphic arts software programs Inkscape and Processing is because I’m in the planning stages of a multi-movement, electroacoustic, multi-media work that I will write for a flute quartet based in Rīga (and possibly a second group in Göteborg). In any case, I chose as my inspirational starting point the subject of Emergence, the study of how complexity arises in various kinds of systems.

I’ve gotten hold of various books on subtopics of the subject, such as Steven Johnson’s Mind Wide Open: Your Brain and the Neuroscience of Everyday Life, Scientific American’s collection of articles Understanding Artificial Intelligence, and James Surowiecki’s The Wisdom of Crowds, which I first heard about when listening to the podcast of one of my favorite radio programs, WNYC’s Radiolab.

One of the movements I’m planning will involve projection of an animated, graphic ’score’ that will be realized/performed by the audience in real-time, accompanied by electronics and the flute quartet. I’ve put myself on the learning curves of both Inkscape and Processing in order to prepare those scores. I’ll talk about my plans for that in another post.

Along the lines of artificial intelligence, I thought I’d try to survey what’s currently happening with computer-assisted (or computer-generated) composition, whether algorithmic or not. If I could categorize the kind of activity going on in this regard right now, I’d break it down into two categories, each with subcategories: those that require knowing or learning code (such as LISP; see, for example, Peter Seibel’s Practical Common Lisp, also available at Amazon) and those that are principally driven through a GUI (Graphical User Interface). The subcategories for each of those are FLOSS or FOSS (Free/Libre Open Source Software) vs. commercial.

I want to talk about my experiences, early impressions, difficulties, or whatever else comes up:
1.) because it will help me process my own thoughts;
2.) if I overcome some technical hurdle (and boy, do they seem to have a way of persistently appearing), I might as well share my solution to save the next poor soul some time; and
3.) to the extent that it’s offered, receive the wisdom and/or expertise of anyone who comes upon what I’m writing and wants to share.

[Screenshot: Tiction]

So that brings me to Tiction, a quite beautiful freeware “nodal music sequencer” created by Hans Kuder with Processing. I downloaded the program and followed the brief instructions at the website. Tiction doesn’t generate sound on its own, so it needs to be connected to an external MIDI keyboard or an internal software synth.

There are basically three menus in Tiction:
1.) The Help menu, which is basically a list of keyboard shortcuts for setting up the nodal network: N to create a node, C to connect it to the next one, etc. It’s very straightforward.
2.) The Options menu, which allows you to choose 16 specific pitches according to their corresponding MIDI note numbers (with a default setting of a C major scale/diatonic collection), the MIDI In/Out connections, sync parameters, the ‘bar brightness,’ and ‘do physical actions on trigger.’
3.) The Edit menu (reached by selecting a node and typing E), which allows you to select specific parameters for the highlighted node, including MIDI channel, physical actions (such as jiggle, attract, repel), and velocity, among other things.

I first connected it to my external MIDI keyboard via my typical Core Audio MIDI Setup in Mac OS X, selecting it from the Options menu. I created several nodes, connected them, and fired it up. Right away, Tiction made some interesting music, with compelling visuals to go with it. The default behavior has the network of nodes you’ve created drift around the screen, and where the network drifts along the X/Y axes affects both the register that is sounded and the velocity (a sketch of the idea follows). What that means is that the default mode is really rather musical. Set certain nodes to attract or repel, and the activity on the screen and the music generated become more agitated. Change the pitch collection and its potential broadens again.
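Here’s a minimal Processing sketch of that mapping as I understand it (my own illustration of the idea, not Tiction’s actual code; the ranges are invented):

// Conceptual sketch: a node drifts around the canvas; its Y position
// chooses the register (octave) and its X position the velocity.
float x, y;

void setup() {
  size(400, 400);
  x = width / 2;
  y = height / 2;
}

void draw() {
  background(0);
  x = constrain(x + random(-3, 3), 0, width);  // random drift
  y = constrain(y + random(-3, 3), 0, height);
  ellipse(x, y, 10, 10);
  int octave = int(map(y, height, 0, 3, 7));     // lower on screen = lower register
  int velocity = int(map(x, 0, width, 30, 120)); // left = soft, right = loud
  println("octave " + octave + ", velocity " + velocity);
}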

I was so excited, I began thinking it would be great to look into screencasting software so that I could make a video of Tiction doing its thing and project it for the audience. I would record MIDI into, say, three or four MIDI channels in Logic; add, edit, or modify material as I saw fit; and voilà! One movement done! Since there will be a choreographer and some dancers as part of the project, I thought this would make a perfect accompaniment.

I then wanted to try running Tiction through Apple’s Logic, and here I wound up hitting several hurdles, some that were solvable and some that I haven’t been able to solve yet. First, running Tiction into Logic requires using the IAC (Inter-Application Communication) Bus that comes by default with Audio MIDI Setup in OS X. At first it didn’t work. I tried it with Midipipe. Still no. Since Tiction was made with Processing, and since Processing compiles to Java, AND since, evidently, there is some lack of support from Apple for Java, I thought the problem might reside within the Java extensions folder. Looking through the (not particularly current) message board at the Tiction website, I decided to buy Mandolane MIDI SPI, thinking it was a long shot, but since it was cheap, well, okay. And it was. A long shot, that is. Still no. But on the right track. Turns out the only extension necessary is mmj (because since OS X 10.4.8, Apple no longer supports some Java MIDI packages). Download mmj and copy both mmj.jar and libmmj.jnilib into /Library/Java/Extensions.

Finally! I get Logic and Tiction talking to each other. But another head (or 3) grew on the hydra:

1.) I can’t set the nodes to play on different MIDI channels. Whenever I hit “E” and edit the MIDI channel number, no matter what number I enter, it always resets itself to channel 1 as soon as I hit “E” again to exit the editor.
2.) I’m having the same “note off” issues that others reported in earlier versions of the software.
3.) I can record MIDI data into Logic from Tiction, but I can’t get their metronomes to sync up. If I select anything other than “Use Internal Clock” in Tiction, it refuses to play for me.

So, it’s not yet necessarily at the deal-breaker stage for me. Though it would be some work, I could still realign the MIDI data to proper bars and beats to deal with the sync issue. (I don’t know whether there’s some clock drift over time that might make that more complicated than I think.) I could re-orchestrate the MIDI data to whatever channels I want after the fact, though that would be time-consuming, and probably less organic than being able to do it directly from the original. I suppose I could make the MIDI ‘note off’ problem a feature rather than a problem, especially if I choose to involve the flute quartet in some interesting, crunchy way against the held tones. (I could also manually shorten other groups of notes that didn’t turn off.)

I posted this issue on the Tiction website. If I get an answer that solves it, I’ll report back. Otherwise, anybody out there already run into and solve this problem?
*********
September 21, update: Problem #1 is solved, with help from Hans Kuder. When changing the MIDI channel in the individual node’s Edit menu, you must use the ENTER key for the change to take effect. The other half of the issue, on the Logic side, is that it is necessary to go to File>Project Settings>Recording and check “Auto Demix by Channel if Multitrack Recording.”
Note Off and Sync issues remain.


I can’t yet figure out how to make this flying robot loop, or, better, how to let the viewer click the mouse or press a key on the keyboard to restart his flight.

In this one, moving the mouse moves the robot and also changes his eye and tele-belly color. Also, if you push a key on your keyboard (for me, I have to push it down and hold it for a little bit), he has a message for you.


So, moving on from my robot to mouse interactivity, I thought I’d post my riff on this example from the book I’m using, only to discover that loading Processing “sketches” into WordPress blogs is a difficult affair. Luckily, as usual, one of my many betters has solved the problem and posted his solution; Morten Skogly’s answer to the problem is here. I chose some ugly colors for this, but anyway, if this works in your browser on your computer, I’d love to know about it. Thanks! Oh, and if it’s not obvious, drag your mouse across the green square…


I recently ordered Daniel Shiffman’s Learning Processing: A Beginner’s Guide to Programming Images, Animation, and Interaction for my Kindle. Completely different from using Inkscape, Processing is code-based. One of the early projects in the book is to design a creature/character entirely of rectangles, lines, ellipses, etc., and to figure out how to write the code to bring that creature to life, so to speak. My robot is no match for the Borg or Cyberdyne’s T-800 series, alas. The code involved seemed counterintuitive at first, but I got faster with it as time went on. Here’s the robot:
My Processing Robot

And here’s the code:

// My robot

size(400,400);
background(255);
smooth();
ellipseMode(CENTER);
rectMode(CENTER);

// Body
stroke(0);
fill(150);
rect(200,200,75,100);

// Head
fill(155);
rect(200,100,60,60);

// Neck
fill(160);
ellipse(200,140,25,20);

// Eyes
fill(255);
ellipse(185,100,16,25);
ellipse(215,100,16,25);
fill(0);
ellipse(185,105,10,12);
ellipse(215,105,10,12);

//Mouth
line(190,120,210,120);

// Antennae
stroke(0);
line(190,70,180,50);
line(220,50,210,70);

//TV box belly
fill(255);
rect(200,205,35,65);

//Feet
fill(160);
rect(180,263,25,25);
rect(220,263,25,25);
ellipse(180,288,21,21);
ellipse(220,288,21,21);
rect(180,312,25,25);
rect(220,312,25,25);
ellipse(178,330,40,15);
ellipse(222,330,40,15);

//Arms
fill(255);
ellipse(160,165,21,21);
ellipse(240,165,21,21);
fill(160);
rect(165,165,25,25);
rect(235,165,25,25);
