
The Programs of the Week: @sonic_sketches Launched

This Week’s Program: Aug 1 - Aug 5

Follow @sonic_sketches on Twitter!

This week, I put sonic-sketches in production. For you new subscribers or infrequent readers, sonic-sketches has been a long-running exercise for me to learn about a bunch of different technologies. Its final form is this Twitter bot. Every morning, @sonic_sketches will tweet a procedurally generated song using NYC weather data as input.

It feels great to ship this thing.

454399f0cf42c92b46e79be0c81bd25ab667f147

Here, I pull out the latitude and longitude from the song metadata and, using Twitter4J, start with a basic tweet stating the day of the week.
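Roughly, the interop looks like this. This is a sketch, not the committed code; the metadata keys and the tweet copy are stand-ins of mine:

    ;; Build a geotagged status with Twitter4J. :lat, :lng, and
    ;; :day-of-week are assumed metadata keys for this sketch.
    (import '(twitter4j TwitterFactory StatusUpdate GeoLocation))

    (defn tweet-day
      [{:keys [lat lng day-of-week]}]
      (let [twitter (.getInstance (TwitterFactory.))
            status  (doto (StatusUpdate. (str "A song for " day-of-week))
                      (.setLocation (GeoLocation. lat lng)))]
        (.updateStatus twitter status)))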

782deb62bd4ac5df678fd73342b8b94843133ece

Next, I use Twitter4J’s setMedia to upload the song, after it’s been converted from a wav to an mp4 by shelling out to ffmpeg.
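The shape of it, sketched out; file names and ffmpeg flags here are assumptions, and status is the StatusUpdate from the sketch above:

    (require '[clojure.java.shell :refer [sh]]
             '[clojure.java.io :as io])

    (defn wav->mp4
      "Shell out to ffmpeg to re-encode the wav as an mp4."
      [wav-path mp4-path]
      (sh "ffmpeg" "-y" "-i" wav-path mp4-path)
      mp4-path)

    ;; StatusUpdate/setMedia accepts a java.io.File:
    (.setMedia status (io/file (wav->mp4 "song.wav" "song.mp4")))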

This doesn't work: I get a 400 back from Twitter's API. I think maybe my mp4 video needs to have some kind of imagery associated with it, and a very quick Google search shows me how to have FFmpeg render the audio data as a video waveform. That's a pretty cool added bonus. I find FFmpeg completely inscrutable. This still doesn't work.
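For the record, the waveform render boils down to FFmpeg's showwaves filter. My shell-out grows a filter graph that looks something like this; the size and codecs are guesses on my part, not the committed values:

    (require '[clojure.java.shell :refer [sh]])

    (defn wav->waveform-mp4
      "Render the audio as a waveform video via FFmpeg's showwaves filter."
      [wav-path mp4-path]
      (sh "ffmpeg" "-y" "-i" wav-path
          "-filter_complex" "[0:a]showwaves=s=640x360:mode=line[v]"
          "-map" "[v]" "-map" "0:a"
          "-c:v" "libx264" "-c:a" "aac" mp4-path))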

Some more Googling reveals that Twitter4J doesn't support video uploads at all: the Twitter API only accepts video through the newer chunked media/upload endpoint. Looks like I'll have to get closer to the metal.

In the next commit, I pull in the weavejester/environ library to make wrangling my Twitter API tokens a bit easier.
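With environ, the tokens read like any other config. The key names here are illustrative, not necessarily what's in the repo:

    (require '[environ.core :refer [env]])

    ;; environ resolves these from environment variables
    ;; (e.g. TWITTER_CONSUMER_KEY) or a local profiles.clj.
    (def consumer-key    (env :twitter-consumer-key))
    (def consumer-secret (env :twitter-consumer-secret))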

909641ec8e7533dc3288b317cfdb7bf0ba176b80

My new function upload-media uses clj-oauth to sign the sequence of POST requests needed to upload chunked media. The INIT command tells Twitter how large my media will be and what type it is, APPEND actually uploads chunks of the video, and FINALIZE signals that the upload is complete. I get a media id back and use Twitter4J to attach it to the tweet. I can now upload my programmatically generated music video to Twitter.
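Sketched out, the INIT step looks something like this; APPEND and FINALIZE are signed the same way. The helper name and parameters are mine, and the consumer comes from oauth.client/make-consumer built with the environ-loaded keys:

    (require '[oauth.client :as oauth]
             '[clj-http.client :as http])

    (def upload-url "https://upload.twitter.com/1.1/media/upload.json")

    (defn init-upload
      "Send the INIT command; returns the media_id Twitter assigns."
      [consumer access-token access-secret total-bytes]
      (let [params {:command     "INIT"
                    :media_type  "video/mp4"
                    :total_bytes (str total-bytes)}
            ;; clj-oauth produces the oauth_* params that sign this request.
            creds  (oauth/credentials consumer access-token access-secret
                                      :POST upload-url params)]
        (-> (http/post upload-url {:query-params (merge creds params)
                                   :as :json})
            (get-in [:body :media_id]))))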

93efb862d94c2a84cf91574c15a06f01828e75ae

I pull the concern of rendering the phases of the moon as emoji out into a new namespace. I want my tweet to display emoji that represent some of the song's input, and it makes sense to move those concerns into a shared library.
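The gist of that namespace, as a sketch; the real function and namespace names may differ:

    (def moon-phases ["🌑" "🌒" "🌓" "🌔" "🌕" "🌖" "🌗" "🌘"])

    (defn phase->emoji
      "Map a lunar phase in [0, 1) onto one of the eight moon emoji."
      [phase]
      (nth moon-phases (int (* phase (count moon-phases)))))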

00dce8660cb29c5ac80d2a7d8f1504844fd5f01f

Using that emoji namespace, I do a bit of work to randomly pick the text for the tweet.
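Something in this spirit; the candidate strings are made up for the sketch:

    (defn tweet-text
      "Pick a random line of copy and prefix it with the moon-phase emoji."
      [phase]
      (str (phase->emoji phase) " "
           (rand-nth ["Good morning, NYC."
                      "Here's today's song."
                      "Rise and shine."])))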

8817379665b680af5413cb3b853402c6f5f48270

Apparently, Ubuntu Precise doesn't actually install the real ffmpeg when you apt-get install it; you get libav's compatibility shim instead. I instruct my CloudFormation stack to install a static build that supports the features I want. I'm always looking for an opportunity to pipe wget into tar.
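The provisioning step amounts to a one-liner like this; the exact build URL is an assumption on my part (John Van Sickle's static builds are the usual source):

    # Download and unpack a static ffmpeg build in one pipe.
    wget -qO- https://johnvansickle.com/ffmpeg/releases/ffmpeg-release-amd64-static.tar.xz | tar xJ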

I also update the crontab to make sure this ffmpeg is on the PATH.
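Cron runs jobs with a minimal PATH, so the fix is a line at the top of the crontab. The paths and schedule here are illustrative, not the deployed values:

    # Make the static ffmpeg visible to cron jobs.
    PATH=/usr/local/ffmpeg:/usr/local/bin:/usr/bin:/bin
    0 7 * * * cd /srv/sonic-sketches && lein run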

73c3f1479d15e08e3224488bd266da7a1d8004df

I make the Twitter API tokens parameters to the CloudFormation stack, and use them during instance provisioning to write the Twitter4J system properties out to /etc/leiningen/profiles.clj.
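What lands in that file looks roughly like this. The twitter4j.oauth.* names are the standard system properties Twitter4J reads at startup; the placeholder values get filled in by CloudFormation:

    ;; /etc/leiningen/profiles.clj
    {:user
     {:jvm-opts ["-Dtwitter4j.oauth.consumerKey=..."
                 "-Dtwitter4j.oauth.consumerSecret=..."
                 "-Dtwitter4j.oauth.accessToken=..."
                 "-Dtwitter4j.oauth.accessTokenSecret=..."]}}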

b6ce03565643396c5edd8029130424f4d9c2a9ea

After the song has completed uploading to S3, it gets tweeted. @sonic_sketches is now live and tweets its first song on Wednesday.

51674a0f55116afd6bb985fd0cf1dd25a6bd00a1

I use core.async to upload to S3 and Twitter concurrently, and await the completion of both.
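A sketch of the pattern, assuming hypothetical upload-to-s3 and tweet-song functions:

    (require '[clojure.core.async :as async])

    (defn publish!
      "Run both uploads on their own threads, then block until both finish."
      [song]
      (let [s3-ch    (async/thread (upload-to-s3 song))
            tweet-ch (async/thread (tweet-song song))]
        ;; <!! blocks until each thread delivers its result.
        [(async/<!! s3-ch) (async/<!! tweet-ch)]))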

2a563dca292c0bfd9d10f78522f2eabf32f71b71

Then, I start to dig a bit deeper into FFmpeg. I never expected sonic-sketches to become a visual experience, but learning more about FFmpeg filtering led me to a bunch of cool discoveries. I change the color of the waveform. I bump the resolution up.

I use the same RNG seed that generated the song to run Conway’s Game of Life in the background of the waveform. I do that because apparently that’s something that FFmpeg can do just out of the box. Why wouldn’t I do that? FFmpeg is amazing.
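Put together, the shell-out is something like this. The filter options are my reconstruction, not the committed command, but the lavfi life source really is built into FFmpeg:

    (require '[clojure.java.shell :refer [sh]])

    (defn render-life-video
      "Overlay the waveform on a Game of Life seeded like the song."
      [seed wav-path mp4-path]
      (sh "ffmpeg" "-y"
          "-f" "lavfi"
          "-i" (str "life=random_seed=" seed ":size=1280x720:rate=30")
          "-i" wav-path
          "-filter_complex"
          "[1:a]showwaves=s=1280x720:colors=cyan:mode=line[w];[0:v][w]overlay[v]"
          "-map" "[v]" "-map" "1:a" "-shortest" mp4-path))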

Happy Friday!

Next week, I’ll work on some logging and a bit more documentation and do a full retrospective of the sonic-sketches project.

⛱ Mark