Interactive Video Effects and Games

Rebecca Arvanites
Cristina Domnisoru

6.111 Introductory Digital Systems Laboratory
Professors Chris Terman and Ike Chuang
December 14, 2005

Abstract

This report documents the development and implementation of live image processing to produce video effects and two interactive games. The project consists of two sets of modules: the video decoding and effect-adding modules, and the game modules, which process game logic. A double-buffer system lets the camera store pixel data into one frame while the previous frame is read, and likewise lets the effect module write a frame while another is being displayed. One game implemented in this project is a sliding puzzle that uses blocks of the camera video as the puzzle pieces the player rearranges. The other game overlays falling objects on the camera video and uses color detection to let the player interact with the game by catching the falling blobs and avoiding the falling bombs.

Table of Contents

Overview (by Rebecca Arvanites)
Module Descriptions
  Subsystem 1 (by Cristina Domnisoru)
    1.1 Fvhdelayer Module
    1.2 Ntsc to ZBT Module
    1.3 Camera Frame Swapper Module
    1.4 Display Frame Swapper Module
    1.5 Vram Display Module
    1.6 Ntsc Decoder Module
  Subsystem 2 (by Rebecca Arvanites)
    2.1 Puzzle Game Module
    2.2 Falling Blob Game Module
    2.3 Linear Feedback Shift Register (Xilinx Core)
    2.4 Find Color Module
Testing and Debugging
Conclusions
Appendix A: Subsystem 1 Verilog Code (by Cristina Domnisoru)
  1.1 Fvhdelayer Module
  1.2 Ntsc to ZBT Module
  1.3 Camera Frame Swapper Module
  1.4 Display Frame Swapper Module
  1.5 Vram Display Module
  1.6 Ntsc Decoder Module
Appendix B: Subsystem 2 Verilog Code (by Rebecca Arvanites)
  2.1 Puzzle Game Module
  2.2 Falling Blob Game Module
  2.3 Linear Feedback Shift Register (Xilinx Core)
  2.4 Find Color Module

List of Figures

High-Level Block Diagram
Register Arrays to Store Square Location Information
Absolute Square Location
Puzzle Game Block Diagram
Falling Blob Game Screenshot 1
Falling Blob Game Screenshot 2, with paddle_position_avg shown in dark blue
Falling Blob Game Module and Find Color Module Block Diagrams
Effect Figures: assorted

Overview

Our interactive video project had two main goals when we started: to apply interesting effects to video from a camera, and to make games in which the user interacts with the applied video effects. The user of the system sees themselves on the screen, and their actions are incorporated into the game while effects are added. This involves communicating with the camera, adding effects to the video, processing game logic, and displaying the altered video.

The project is made up of two main sets of modules: the video processing and effect modules, and the game modules. The video processing and effect modules read and store data from the camera, add effects to the frame, then write and display the video frames. A double-buffer system is used in two places: the camera writes to one frame while the video processing modules read from another, and the frames swap when the camera has finished writing; similarly, the effect module writes to one frame while the display module reads from the other, and the two swap once the effect module has written the whole frame.

The game modules communicate with the video effect modules by telling them what to display according to the game logic. In the puzzle game, the camera video is rearranged in blocks of pixels, creating a sliding tile puzzle in which the player swaps squares adjacent to a blank square in an attempt to restore the video to its normal order. Buttons are used to select the current square and swap its location with the blank square. In the falling blob game, the camera video of the player is overlaid with falling game-generated blob objects, some of which the player wants to catch and some of which the player is trying to avoid.
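The color detection that drives this interaction can be modeled in software. The sketch below is a Python model, not the report's Verilog Find Color module: it scans one video line for "red" pixels in the design's 18-bit RGB format and averages their x positions, in the spirit of the paddle_position_avg signal named in the figure list. The channel ordering within the 18-bit pixel and the particular red thresholds are assumptions.

```python
# Python model of color detection on one scanline (the 6-bit-per-channel
# pixel layout {R[17:12], G[11:6], B[5:0]} and the thresholds are guesses,
# not taken from the actual Verilog).

def is_red(pixel18):
    """Treat a pixel as red when R is strong and G, B are weak."""
    r = (pixel18 >> 12) & 0x3F
    g = (pixel18 >> 6) & 0x3F
    b = pixel18 & 0x3F
    return r > 40 and g < 20 and b < 20  # hypothetical thresholds

def paddle_position_avg(scanline):
    """Average x coordinate of red pixels, or None if no red is seen."""
    xs = [x for x, p in enumerate(scanline) if is_red(p)]
    return sum(xs) // len(xs) if xs else None
```

In hardware the same idea would be a running sum and count of matching pixel columns, divided (or shifted) once per frame rather than stored as a list.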
The player uses a red paddle card or their hand to catch the blobs on the screen, gaining points for catching square objects and losing points for coming into contact with falling bombs.

Subsystem 1 (by Cristina Domnisoru)

Overall Design

The overall design of the graphics system is as follows. NTSC data is sent from the camera through a decoder that parses the stream into coherent pixels in YCrCb format and processes the camera's sync signals. The pixels are then sent to an RGB converter, which also delays the sync signals to account for the time required to compute the conversion to RGB. This module keeps 6 bits for each of the 3 channels, so each pixel becomes 18 bits long. The 18-bit pixels are sent to a memory-writing module that clumps pairs of adjacent pixels together and writes 2 pixels to each ZBT 1 memory location (which is 36 bits wide). An effect module reads the pixels out of that memory, processes them, and writes them to ZBT 0. (ZBT 1 and ZBT 0 are the two ZBT SRAMs on the Xilinx labkit.) A display module reads the processed pixels out of ZBT 0 and outputs them to the screen, reading one memory address at a time and outputting the two pixels it contains one after the other.

Memory Constraints

Before elaborating on the particular modules involved, we will explain the memory requirements that constrained the entire design. Given a limited amount of memory, we had to make two basic choices: how many pixels to store per frame and how many bits to store per pixel. A 720 x 480 image has 345,600 pixels, and a ZBT RAM has 524,288 rows. Since the camera outputs 720 * 480 pixels per frame, we decided that there was no need to store more pixels than that per frame (interpolation could always be done at the output to obtain a larger image). At that resolution, there are enough rows in one ZBT to store one frame at one pixel per row, or two frames at 2 pixels per row.
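The frame-budget arithmetic above, together with the 2-pixels-per-row packing the design settles on, can be checked with a short Python model (a software sketch; the {R, G, B} ordering inside the 18-bit pixel is an assumption, not taken from the Verilog):

```python
# Check the memory budget: 720 x 480 pixels per frame, a ZBT with
# 524,288 rows of 36 bits, and two 18-bit pixels packed per row.

def to_18bit(r8, g8, b8):
    """Truncate three 8-bit channels to 6 bits each and pack them."""
    return ((r8 >> 2) << 12) | ((g8 >> 2) << 6) | (b8 >> 2)

def pack_pair(pix_even, pix_odd):
    """Combine two adjacent 18-bit pixels into one 36-bit ZBT word."""
    return (pix_even << 18) | pix_odd

PIXELS_PER_FRAME = 720 * 480             # 345,600 pixels
ROWS_PER_FRAME = PIXELS_PER_FRAME // 2   # 172,800 rows at 2 pixels/row
ZBT_ROWS = 524_288

assert PIXELS_PER_FRAME <= ZBT_ROWS      # one frame fits at 1 pixel/row
assert 2 * ROWS_PER_FRAME <= ZBT_ROWS    # two frames fit at 2 pixels/row
```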
Since each row of a ZBT is 36 bits wide, we decided to store 2 pixels per row, implying 18 bits per pixel and 6 bits (out of 8) per channel. These choices allowed us to store 4 frames in memory, 2 per ZBT. ZBTs are convenient for image processing because they have no wasted cycles and can be clocked at 65 MHz, the speed required for XVGA and sufficient to accommodate our display.

Buffer System

Our design uses 4 buffers (since we can store 4 frames in memory). 2 buffers are used by the camera to store input
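The ping-pong behavior of one buffer pair can be sketched in Python. This is a behavioral model of the frame-swapping logic described in the Overview, not the actual frame swapper Verilog, and the buffer numbering is ours:

```python
# Behavioral model of one ping-pong buffer pair: the writer fills one
# frame buffer while the reader scans the other, and the two exchange
# roles only at frame boundaries, so the reader never sees a frame
# that is only partially written.

def buffer_schedule(num_frames):
    """Return the (write_buffer, read_buffer) pair used for each frame."""
    schedule = []
    write_buf, read_buf = 0, 1
    for _ in range(num_frames):
        schedule.append((write_buf, read_buf))
        # Swap roles at the end of the frame.
        write_buf, read_buf = read_buf, write_buf
    return schedule
```

The full design uses two such pairs (4 frames total, 2 per ZBT): one pair between the camera writer and the effect module, and one between the effect module and the display.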