Project Lab Archive – Automated Rock Band

I’ve decided to start writing about some of my old, and possibly unfinished, projects from my time at Texas Tech. Before I begin, I’d like to go over how the EE program there is structured.

Tech has a unique system: the labs normally attached to traditional classes (circuits, electronics, electromagnetic theory, etc.) are replaced by full-credit lab courses that require dozens of hours of lab time and push students to work on semester-long projects. There are five of these courses, or project labs, and every student must complete all of them before receiving a degree. Each one is more difficult than the last and is meant to teach students to solve problems and complete projects on a schedule. Think of each project lab as a senior design project every semester.

For my final project lab course (lab 5), I chose a project centered on one of my video game interests: Rock Band. At the start of the semester, I outlined the vague objective of building a system to automatically play Rock Band. My initial system outline for the project looked similar to this:

block diagram

As the project progressed, I decided to add the additional requirement of keeping the entire system hidden from anyone I might be playing against. This meant that the video analysis board would be connected directly to the Xbox, interpret video signals, and generate digital outputs to control the guitar controller. Whenever a song was started, I would enable the externally driven inputs and “play” through the song automatically. Ideally, I would also lock out all the guitar inputs until the song completed. The system also had to be able to play any song without scripting (pre-programming a set of pulses for a specific song).

The video analysis and digital processing boards were eventually integrated into one circuit, but during development the two were kept separate. A block diagram of the board is shown below.

video process diagram

This project required a good understanding of composite video signals. An NTSC composite video signal is composed of three signals: Y, U, and V. Y represents the luminance (brightness) of the picture as well as the synchronizing pulses. If you were to view only this portion of the signal on a color TV, the image would appear black and white (monochrome). U and V represent hue and saturation, also known as chrominance. These signals carry the color information, and when combined with luminance they produce a full-color picture. The luminance signal varies between 0.5 V and 2.0 V, where 0.5 V is black and 2.0 V is white. The chrominance is added by superimposing a 3.579545 MHz sine wave (yes, the spec is that exact frequency) onto the luminance signal, with color indicated by its phase shift. Video is drawn interlaced, top to bottom, every other line at a time: at any given moment either the even or odd lines (the even or odd field) are being drawn, but never both at once. Since this happens quickly, humans perceive the alternating fields as a series of complete images, which the brain integrates into smooth video.
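
To make the voltage levels concrete, here is a small sketch in C that maps a luminance voltage to an 8-bit brightness value. It only assumes the 0.5 V/2.0 V range described above; the function name and 8-bit scale are my own choices for illustration:

```c
#include <stdint.h>

/* Map a luminance voltage to an 8-bit brightness level, using the
 * 0.5 V (black) to 2.0 V (white) range described in the text.
 * Voltages outside the range are clamped. */
static uint8_t luma_to_level(double volts)
{
    const double black = 0.5, white = 2.0;
    if (volts <= black) return 0;
    if (volts >= white) return 255;
    return (uint8_t)((volts - black) / (white - black) * 255.0 + 0.5);
}
```

A bright puck sitting well above the comparator threshold would land near the top of this scale, which is exactly the property the detection logic later relies on.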

rb_arrow.fw

Rock Band player interface

In order for the system to detect a note or “puck” (indicated with a red arrow in the figure above), it has to be able to lock onto a specific line of video. The system must know the beginning of a field as well as the beginning of each line, and it must accurately keep track of the line count. This is achieved using an LM1881 IC. This chip is called a video sync separator because it extracts timing information from a composite video signal: it outputs pulses when certain portions of the signal are observed, such as the beginning of a new video line, the beginning of a new field, and whether the current field is even or odd. Using these pulses, maintaining synchronization with the video signal becomes much easier, allowing it to be analyzed with a simple microcontroller instead of processing entire frames on much more powerful hardware. The image below shows the sync separator portion of the circuit.

lm1881_small

Sync separator circuit
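
As a sketch of how the sync separator’s pulses can drive line tracking: reset a counter on the vertical sync output, increment it on each horizontal sync, and raise a flag when the line of interest arrives. The ISR names and the `LINE_OF_INTEREST` value below are placeholders for illustration, not what the actual firmware used:

```c
#include <stdbool.h>

#define LINE_OF_INTEREST 180u   /* hypothetical line carrying the pucks */

static volatile unsigned line_count;
static volatile bool on_target_line;

/* Triggered by the LM1881's vertical sync output: a new field begins. */
void vsync_isr(void)
{
    line_count = 0;
    on_target_line = false;
}

/* Triggered by the LM1881's composite sync output: a new line begins. */
void hsync_isr(void)
{
    if (++line_count == LINE_OF_INTEREST)
        on_target_line = true;  /* kick off the comparator sampling timer */
}
```

The flag is what would arm the timer-based comparator sampling described later in this post.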

During testing, I noticed that because of impedance mismatches and the increased load on the Xbox’s composite output, the luminance signal became distorted and required amplification. To compensate, I built a simple op-amp-based non-inverting amplifier with adjustable gain. I used an AD811, a high-performance video op-amp with enough bandwidth and speed to amplify composite video signals.

compamp_small

Video amplifier circuit
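
For reference, a non-inverting stage’s gain is G = 1 + Rf/Rg, which is why swapping in a potentiometer for one of the resistors makes the gain adjustable. A tiny sketch (the resistor values are illustrative, not the ones on my board):

```c
/* Gain of a non-inverting op-amp stage: G = 1 + Rf / Rg.
 * rf is the feedback resistor, rg the resistor to ground. */
static double noninverting_gain(double rf, double rg)
{
    return 1.0 + rf / rg;
}
```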

In order to remove the chrominance information from the video signal, a filter is needed. A perfect filter would remove only the 3.579545 MHz sine wave and leave the rest of the signal intact. There are two commonly used methods, both of which I explored. A low-pass filter with a corner frequency of 2 MHz is typically used because of its low cost and simplicity. However, this degraded the luminance signal severely and made discerning the pucks much more difficult. The other method is to construct a notch filter centered at the frequency of interest. Using this technique (also known as “trap filter separation”), much more of the signal is preserved, allowing for more precise identification of the pucks. The notch filter is based on a high-speed Twin-T design which uses an op-amp to compensate for the loss through the filter. Adjusting the capacitors allows the center frequency and notch depth to be tuned. An AD811 was also used for this stage of the video analysis board. The final schematic for the filter and its frequency response are shown below.

filter

Filter schematic

response

Filter response measured on a network analyzer
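
For anyone replicating the filter, a balanced Twin-T’s center frequency follows f0 = 1/(2πRC), which is why tweaking the capacitors moves the notch. A quick sanity check in C (the component values are illustrative, chosen to land near the color subcarrier; they are not the board’s actual parts):

```c
/* Center frequency of a balanced Twin-T notch: f0 = 1 / (2 * pi * R * C). */
static double twin_t_f0(double r_ohms, double c_farads)
{
    const double pi = 3.14159265358979323846;
    return 1.0 / (2.0 * pi * r_ohms * c_farads);
}
```

With R = 1 kΩ, a capacitance of roughly 44.5 pF puts the notch near 3.58 MHz, close enough that a small variable capacitor can pull it onto the subcarrier.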

With the video processing completed, I moved on to the digital processing portion of the circuit. I chose to work with an MSP430 because of its built-in comparator, hardware interrupts, and availability (we had a bunch of them in the stock room). Initially, I interfaced the prototype circuit to a TI development board using pin headers and wires. The circuit was first prototyped using a style known as “sky wiring”. This method is commonly used for RF circuits and is similar to “dead bug” prototyping: ICs and components do not sit on a substrate and are instead suspended by their ground connections. Since I was working with video signals and making frequent small tweaks to the circuit, I found sky wiring easier to work with than a PCB substrate. Images of the prototype board can be found below.

DSC_6891_small

Video amplifier, notch filter, and sync separator ICs


I used variable capacitors to tune the filter for optimum performance. Even though I had access to a network analyzer, I recommend that anyone replicating this project use variable capacitors to tune the filter as well.

DSC_6908_small

Side view of the prototype video analysis board

Once I verified that everything worked properly, I designed and milled a circuit board to include all the video processing hardware and the microcontroller. A full schematic of the system along with the hardware layout can be downloaded using the links at the bottom of the page.

Once the video analysis hardware was complete, I worked on the code which would use the MSP430’s comparator to detect pucks on the screen. An external voltage reference was set using a simple voltage divider and fed to the MSP430. This allowed me to manually adjust the threshold as needed.
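
The divider itself is just Vref = Vcc · R2 / (R1 + R2). A minimal sketch (the resistor values are illustrative; the real threshold was dialed in by hand):

```c
/* Comparator reference from a resistor divider across the supply:
 * Vref = Vcc * R2 / (R1 + R2), where R2 is the lower leg. */
static double divider_vref(double vcc, double r1, double r2)
{
    return vcc * r2 / (r1 + r2);
}
```

Making one leg a potentiometer gives the manual threshold adjustment described above.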

Two sets of interrupts were used to detect whether pucks were present in the video signal. The first interrupt would be triggered when the line I chose to monitor was being drawn. Once that interrupt fired, a second timer-based interrupt would begin. This timer would check the value of the comparator after a set amount of time and write it to a variable. The timer would then reset with a different delay and check for the next puck. Once all the pucks were checked, the proper key presses would be sent to the guitar controller along with a strum.
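
To make the timing concrete, here is a rough sketch of how the per-puck sample delays could be derived. The NTSC figures (about 52.6 µs of active video per roughly 63.6 µs line) are standard; splitting the active region into five equal lanes and sampling each lane’s center is my assumption about the screen layout, not the delays the original firmware used:

```c
#define ACTIVE_US     52.6   /* approximate active video per NTSC line */
#define BACK_PORCH_US  9.4   /* approximate hsync-to-active-video delay */
#define LANES          5     /* five fret lanes on the Rock Band highway */

/* Microseconds after horizontal sync at which to sample lane 0..4,
 * aiming for the center of each lane. */
static double lane_sample_us(int lane)
{
    double lane_width = ACTIVE_US / LANES;
    return BACK_PORCH_US + lane_width * (lane + 0.5);
}
```

Converting these microsecond offsets into timer ticks at the MSP430’s clock rate would give the successive reload values for the sampling timer.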

There were other error-checking methods which I did not implement, such as comparing the current video line to the previous one, capturing several comparator readings and comparing them, and setting individual comparator thresholds for the different puck colors.

Once the software generated the correct outputs for the guitar controller, they were sent over an RJ45 cable and connector to a modified guitar. I replaced the guitar’s debug port with an interface to my circuit and added an optoisolator/driver board so as not to modify the guitar’s original operation.

button_interface

Interfacing automated rock band to the guitar circuit

optoschem

Optoisolator/driver circuit

opto_board

Optoisolator/driver board installed in a Rock Band wireless guitar

rj45_board

RJ45 interface board

schematic

Video/digital board schematic (click to enlarge)

Layout (click to enlarge)

Layout (click to enlarge)

If you would like to download any of the design files, reference material, or schematics, click here. As with my other projects, I’m releasing this under the GPL license.

I will make the code available on github as soon as I clean it up a bit more!
