Archive for the ‘Video’ Category

MSP430 TV output update

Just a quick update. With the new MSP430Gs and their bigger flash space, I was able to increase the resolution of my TV output program. The resolution is now 192×240. When the new 16K MSP430Gs arrive, I plan on increasing this to 384×240, which will finally approach the native aspect ratio.
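For anyone keeping score, the flash math works out nicely at 1 bit per pixel. Here's a quick back-of-the-envelope check (plain C, nothing MSP430-specific):

```c
#include <assert.h>

/* Flash needed for a monochrome (1 bit per pixel) image: width * height / 8 bytes. */
int imageBytes(int width, int height)
{
    return width * height / 8;
}

/* imageBytes(192, 40)  ->   960 bytes: the original low-res image
   imageBytes(192, 240) ->  5760 bytes: fits the 8K parts with room for code
   imageBytes(384, 240) -> 11520 bytes: needs the 16K parts                   */
```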

I just had to change a couple lines of code to get this going, and include the larger image file. I used an MSP430G2452 with 8K flash.

NOTE: This should be compiled in CCS under debug mode. Using release mode or different compilers will likely require adjustment of the software delay toward the bottom of the code.

Here’s the code.

The example images are below:
miss_nature_close.h
miss_nature_wide.h

If you want to make your own images to use, first create a 192×240 monochrome bitmap. To convert the bitmap into a .h header file, I used a program called Image2Code from CrystalFontz. Use this setting to get the proper image format:
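If you'd rather script the conversion than use Image2Code, the packing itself is simple: each row of 192 pixels becomes twelve 16-bit words. Here's a rough sketch. The MSB-first bit order is my assumption about the format the program expects, so compare one known row against Image2Code's output before trusting it.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Pack one row of monochrome pixels (0 = black, nonzero = white) into
 * 16-bit words, most significant bit first.  A 192-pixel row packs into
 * 12 words.  NOTE: the MSB-first bit order is an assumption here, not
 * taken from Image2Code's documentation. */
static void packRow(const uint8_t *pixels, uint16_t *words, size_t width)
{
    size_t w;
    for (w = 0; w < width / 16; w++) {
        uint16_t word = 0;
        int bit;
        for (bit = 0; bit < 16; bit++)
            if (pixels[w * 16 + bit])
                word |= (uint16_t)(1u << (15 - bit));
        words[w] = word;
    }
}
```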

TV Output Prototype


It still needs a battery holder. I might just buy one.

Launchpad monochrome NTSC composite video


Source code.

EDIT: I have updated the TV Output program to work with the newer 8K MSP430 Value Line chips. The resolution is much higher now.

EDIT: Now that we know how to flash DCO calibrations to MSP430G, this works without an external oscillator. That brings the external component list down to 2 resistors. Nice! Here’s the code to use if you have your constants calibrated.

I finally had time to get TV output working on my Launchpad over Thanksgiving break. I have a lot of half-finished projects sitting around, but this one has really been bugging me. This was sort of a self-enrichment project, so all the code is my own; however, I got several ideas from the write-ups of other projects.

Right now it only displays one image, stored in flash. If you want to change the image, you have to reprogram the chip. The image resolution is 192×40. I know that isn't an ideal aspect ratio for televisions, but I was somewhat limited in my choices. I will explain shortly.

If you want to brush up on your composite video, I used http://www.batsocks.co.uk/readme/video_timing.htm as a reference throughout the project. The page is for PAL, but they give NTSC info as well. The general format is the same between NTSC and PAL, but the timings are just a little different. I’m using the “fake progressive” trick on that page, adapted for NTSC. I don’t want to get too far into the format, as it would probably double the size of this post. You probably only need a basic understanding of hSync, vSync and how a scanline is drawn.

The microcontroller must do three things with correct timing in order to output composite video:

  1. Output vSync signals at the beginning of every frame.
  2. After vSync, output hSync signals at the beginning of every scanline
  3. Output video data after hSync on every visible scanline.

The real trick was figuring out the right way to use hardware to accomplish as much of this as possible. Using hardware frees up flash memory to fit the image, and allows me to get the really important hSync timings right.

First, let’s look at vSync. vSync pulse timing accuracy doesn’t seem to be terribly important, at least on the TV I’m using. For that reason, I have vSync implemented in software. I should note that the constant TICKS_HSYNC here has nothing to do with hSync. It just happens that the duration of hSync is the same as the duration of a pulse used in vSync. Sorry if it’s confusing.


// vsync broad sync pulse section
void vSyncTripleBroad(){
  int count;
  for(count = 0; count < 3; count++){
    while(TAR < TICKS_HALF_SCANLINE - TICKS_HSYNC){}
    P1OUT = SYNC;
    while(TAR < TICKS_HALF_SCANLINE){}
    P1OUT = 0;
    while(TAR < TICKS_SCANLINE - TICKS_HSYNC){}
    P1OUT = SYNC;
    while(TAR > TICKS_SCANLINE - TICKS_HSYNC){}  // wait here for TAR to wrap at CCR0
    P1OUT = 0;
  }
}

// vsync short sync pulse section
void vSyncTripleShort(){
  int count;
  for(count = 0; count < 3; count++){
    while(TAR < TICKS_SHORT_SYNC){}
    P1OUT = SYNC;
    while(TAR < TICKS_HALF_SCANLINE){}
    P1OUT = 0;
    while(TAR < TICKS_HALF_SCANLINE + TICKS_SHORT_SYNC){}
    P1OUT = SYNC;
    while(TAR > TICKS_HALF_SCANLINE + TICKS_SHORT_SYNC){}  // wait here for TAR to wrap at CCR0
    P1OUT = 0;
  }
}

These functions just toggle the sync pin at the right times to produce the vSync signal.

// software vSync
if(scanline == 0){
  P1OUT = 0;
  P1SEL = 0;    // let software control P1.5 instead of TACCR0
  vSyncTripleShort();
  vSyncTripleBroad();
  vSyncTripleShort();
  scanline = 8;    // vSync takes several scanlines
  P1SEL = SYNC;    // let TimerA handle hSync
}

At the beginning of the first scanline, control of the sync pin is handed over to software. We then use the above functions to produce the entire vSync section (scanlines 0 – 8). Finally, we allow TimerA to control the sync pin and handle hSync. The time spent in these software loops could be used to add functionality (e.g. software serial, on-the-fly image generation, etc.). vSync could be implemented with TimerA, but TimerA is used for hSync timing. I think it could be done, but it would probably be messy.

The next part of the frame is the vertical blanking period. These are scanlines after vSync, but “above” the visible part of the screen. The scanlines in this section are basically hSync pulses that are not followed by any image data. Nothing really has to be done in software here, as hardware handles hSync.

To produce hardware hSync, TimerA CCR0 is set to the scanline width in clock ticks, and CCR1 is set to the width of hSync. At the end of a scanline (which is also the beginning of the next one), TAR hits CCR0 and resets to 0. This causes TimerA to bring the sync pin low. When TAR hits CCR1, the sync pin is pulled high again. This creates the hSync pulse at the beginning of every scanline. All the remaining scanlines in the frame will have this hSync pulse.
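To make those CCR0/CCR1 values concrete, here's a little helper for turning NTSC timings into TimerA ticks. The 16 MHz MCLK is just an assumed example, not a value from the project; plug in whatever your calibrated DCO actually runs at.

```c
#include <assert.h>

/* ASSUMPTION: 16 MHz MCLK.  Substitute your calibrated DCO frequency. */
#define MCLK_HZ  16000000.0

/* Convert a duration in microseconds to TimerA ticks, rounded to nearest. */
static int usToTicks(double us)
{
    return (int)(us * MCLK_HZ / 1e6 + 0.5);
}

/* usToTicks(63.5556) -> 1017  (one NTSC scanline; goes in CCR0)
   usToTicks(4.7)     -> 75    (one hSync pulse;   goes in CCR1)  */
```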

This takes care of all the sync pulses.

After the vertical blanking period, the scanlines are visible, and it's time to start sending pixels. Enabling the CCR1 interrupt allows its ISR to run right after the end of every hSync on every scanline. That's almost the perfect place to start sending out image data. There's a small blanking period after hSync, though, so if the image were drawn right away, it would start too far to the left of the screen. This is handled with a short software delay.

I initially tried using software to send the image, but I was left unsatisfied with the maximum horizontal resolution of 30 pixels. A comment on a previous post led me back to Batsocks and their Britishly-titled TellyMate Shield. There, I found out they were using hardware SPI to output the image. I realized I could just load the USISR with image data and let it do the rest.

The USI must be clocked to output at the correct bitrate. The slower the clock, the wider the pixels horizontally. Wider pixels mean lower horizontal resolution. This leads to my choice of aspect ratio. If I increased the USI clock divider, I would halve the number of horizontal pixels. This would leave more memory for more vertical pixels, but lead to an aspect ratio taller than wide. A clock divider of 4 and a resolution of 192×40 ended up being the best choice. Here’s the CCR1 ISR that handles drawing the image:


// After vSync, this ISR begins USI output of the image.
#pragma vector=TIMERA1_VECTOR
__interrupt void Timer_A1 (void){
  int wordCounter = 0;
  TAIV = 0;              // Clear TimerA's interrupt vector register

  USICTL0 |= USIOE;      // USI output enabled
  USICTL0 |= USIPE6;     // Port 1.6 USI data out

  while(TAR < TICKS_HBLANK){}
  do{
    USICNT |= 20;        // arbitrary number > 16; keeps the USI running
    USISR = currentRowPtr[wordCounter];
    wordCounter++;

    // software delay allowing full USI shift out
    sleep = 0;
    while(sleep < 2)
      sleep++;
    _nop();
    _nop();
    _nop();
    _nop();
    _nop();
    _nop();
    _nop();
  }while(wordCounter < WORDS_SCANLINE);

  if(subRowCounter == ROW_HEIGHT){
    subRowCounter = 0;
    imageOffset += WORDS_SCANLINE;
  }
  subRowCounter++;
  currentRowPtr = &imagePtr[imageOffset];

  while(TAR < TICKS_RIGHT_EDGE){}  // Wait for right edge of screen
  USICTL0 &= ~(USIPE6 + USIOE);    // Release control of video pin to software
}

After waiting for hBlank, the USI is started and loaded with the first 16-bit word of image data. It must be reloaded precisely when it runs out, every 64 MCLK cycles. Remember USICLK = MCLK / 4, and 4 * 16 = 64. I wouldn't exactly call my code self-documenting, so here's an explanation of the variables and constants for this function:

  • currentRowPtr points to the address of the current row of the image array.
  • wordCounter is the index of the next word to be loaded into the USISR.
  • WORDS_SCANLINE (= 12) is the number of 16-bit words per scanline. 16 * 12 = 192, our horizontal resolution.
  • subRowCounter/ROW_HEIGHT – since there are many more scanlines than rows in the image, the same row must be drawn for several scanlines. I'm calling these scanlines subrows, and ROW_HEIGHT is the number of scanline repetitions per image row.
  • imageOffset just moves the image pointer forward through the image, row by row.
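The word-reload budget can be sanity-checked with a few lines of arithmetic. The 48 µs figure assumes a 16 MHz MCLK, which is my assumption for illustration, not a measured value:

```c
#include <assert.h>

/* Sanity check of the USI pixel budget, assuming MCLK = 16 MHz (an
 * assumed example clock) and USICLK = MCLK / 4. */
enum {
    PIXELS_PER_LINE = 192,
    BITS_PER_WORD   = 16,
    USI_DIV         = 4,                                  /* USICLK = MCLK / 4      */
    WORDS_SCANLINE  = PIXELS_PER_LINE / BITS_PER_WORD,    /* 12 words per scanline  */
    CYCLES_PER_WORD = USI_DIV * BITS_PER_WORD,            /* 64 MCLK cycles: the
                                                             reload deadline        */
    CYCLES_ACTIVE   = WORDS_SCANLINE * CYCLES_PER_WORD    /* 768 cycles = 48 us at
                                                             16 MHz, inside the
                                                             ~52 us visible part of
                                                             an NTSC scanline       */
};
```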

So that’s the general idea of how it works. There are parts of the program I haven’t covered, but they mostly deal with setting up hardware and directing the program flow.

Almost forgot the schematic…

I’d be happy to answer any questions and take any suggestions about the code or my blog. I hope this program is useful or fun for someone! You could make an electronic business card, a holiday greeting, or trick your friends.

Here’s the source code with some example images.

One note — this must be compiled as C++. I had to explicitly tell CCS to do this, so add the files to a new project, and then edit the project properties as follows (click to expand):
