Project Proposal
Giovanni Reveles and Chuan Zhang
11/03/2006

Overview

Our project implements a "Slap the Ninjas Away" game. A basic description of the game is included in our abstract. The game will be built using the video capture and video display functions of the labkit and FPGA. Luckily, these functions are well documented, and we hope to learn from past groups. A rough block diagram is included below; its three main modules are described in detail in the sections that follow.

[Block diagram: the image from the decoder enters the Video Decoding module, which passes the video image and the hand position to the Game Module and the Graphics Overlay; the Game Module sends game graphics to the Graphics Overlay, which produces the pixels sent to the display.]

Video Decoding Module

An NTSC video camera will be used to view the players' hands. Analog data from the NTSC video camera is fed into the AD7185 analog-to-digital decoder chip. The decoder chip outputs a stream of digitized video data giving the RGB value of each pixel, with timing controlled by horizontal and vertical sync signals. The video stream will be written into a ZBT SRAM, which will store up to four frames at one time. The "Graphics Overlay" module will communicate with the ZBT by sending it memory addresses and retrieving pixel data to superimpose on top of the game images. The "Video Decoding" module comprises the RGB converter, the ZBT, and the logic which calculates the location of the hand; pixel data going to the "Graphics Overlay" module passes through this logic. We plan to use differently colored gloves for the players of our game, so we should be able to filter out most colors except the desired one. The location of the hand is then fed to the "Game Module" to calculate collisions between hands and ninjas. We borrowed ideas on how to buffer video images efficiently from Rebecca Arvanites and Cristine Domnisoru's Interactive Mini-Games project.
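To make the hand-location step concrete, here is a minimal Verilog sketch of the color filtering and center-of-mass calculation described above. It is only an illustration of the idea: the module name, port names, widths, and color thresholds are placeholders we made up, and the real thresholds will have to be tuned to the glove color once we have camera data.

// Hedged sketch: per-frame color keying and center of mass of matching pixels.
// All names, widths, and thresholds are placeholders, not a final interface.
module hand_locator (
    input  wire        clk,            // pixel clock
    input  wire        frame_start,    // one-cycle pulse at the top of each frame
    input  wire        pixel_valid,    // high when r, g, b, x, y describe a live pixel
    input  wire [7:0]  r, g, b,        // decoded pixel color
    input  wire [9:0]  x,              // pixel column
    input  wire [9:0]  y,              // pixel row
    output reg  [9:0]  hand_x,         // center of mass from the previous frame
    output reg  [9:0]  hand_y
);
    // Color window for the glove (placeholder values for a reddish glove).
    localparam [7:0] R_MIN = 8'd150, G_MAX = 8'd80, B_MAX = 8'd80;

    wire match = pixel_valid && (r > R_MIN) && (g < G_MAX) && (b < B_MAX);

    reg [29:0] sum_x, sum_y;   // running coordinate sums over one frame
    reg [19:0] count;          // number of matching pixels this frame

    always @(posedge clk) begin
        if (frame_start) begin
            // Latch last frame's result, then restart the accumulators.
            if (count != 0) begin
                hand_x <= sum_x / count;   // a real design would pipeline or
                hand_y <= sum_y / count;   // approximate this divide
            end
            sum_x <= 0; sum_y <= 0; count <= 0;
        end else if (match) begin
            sum_x <= sum_x + x;
            sum_y <= sum_y + y;
            count <= count + 1;
        end
    end
endmodule

In practice the divide at frame_start would probably be pipelined or approximated, but the sketch shows the intended data flow: filter by color, accumulate coordinates, and hand one position per frame to the Game Module.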
Game Module

The Game Module is responsible for the game logic. It takes as input the location of the hands and determines whether there has been a collision between a hand and a ninja. A collision causes the ninja to be slapped off the screen, changing its movement accordingly. The module is also responsible for generating the ninja sprites on screen, making them appear at certain intervals and move toward the center of the screen unless slapped. A certain number of ninjas will be instantiated off screen at the beginning, and a "pseudo-random" timer will decide when each ninja appears on screen; once a ninja leaves the screen, it will return after a certain time. The Game Module also tracks the player's health, which decreases whenever a ninja reaches the center of the screen; if the health reaches zero, the game is over. ROMs will hold the ninja sprites and their animations, and the representation of the health/score will also be handled with a ROM. In addition, a game timer will make sure the game does not run forever.

Graphics Overlay

The Graphics Overlay module is responsible for merging two images: the camera frame and the game graphics generated by the Game Module. The XVGA module we will be using requires a 65 MHz clock for a 60 frames/second refresh on the VGA monitor. This is much faster than the camera can output, so we will reuse some frames and read directly from the ZBT buffer, which supports a 65 MHz read access frequency. For each pixel the Game Module outputs, the overlay displays that pixel on screen; where the game produces no pixel (a black pixel), it outputs the RGB data from the camera instead. This way, the game graphics are laid "on top" of the camera image. (A minimal sketch of this per-pixel multiplexer appears after the Work Plan below.)

Testing Plan

The three main modules of our project will be built and tested independently before they are integrated. Testing of the "Video Decoding" module will consist of two parts. First, we will make sure we can obtain camera input by taking the output of the AD7185, feeding it into a ZBT, and displaying it on a screen, bypassing the "Graphics Overlay" module. Once this works, we will add a module that intercepts the digitized video data on its way to the screen and processes it to detect the hands: the image will be filtered by color and the center of mass of each hand located. To test this, we will simply display a number on screen for each calculated center of mass, which lets us roughly judge whether our algorithm is accurate.

The "Game Module" will be tested by creating a test module that generates hand positions we can control with the up, down, left, and right buttons. The hand-position image will be juxtaposed with the ninja image using a simple OR mux feeding the display. We will then run the game as normal, except that we slap away the ninjas with cursors rather than a hand.

Finally, the "Graphics Overlay" module will be tested by feeding data from a separate ZBT and overlaying this image on top of a preset pattern of pixels, for example vertical bars. After each module is individually tested, they will be integrated together.

Work Plan

Giovanni will create the "Graphics Overlay" module. Chuan will create the "Video Decoding" module. Giovanni and Chuan will work together on the "Game Module" unless a further feature needs to be implemented. We plan to work about 60% of the time independently and 40% of the time together. We will spend the shared time explaining the progress each of us has made, helping with each other's bugs, and reading and understanding each other's code. If we both have a solid understanding of the whole project, this will help us debug and integrate the design more smoothly.
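The transparency rule in the Graphics Overlay section reduces to a per-pixel multiplexer. The sketch below illustrates it under the assumption that the game and camera pixel streams are already aligned to the same 65 MHz pixel clock; the module and signal names are placeholders, not our final interface.

// Hedged sketch: game pixels are opaque unless they are black, in which case
// the camera pixel shows through. Names and widths are placeholders.
module graphics_overlay (
    input  wire        clk_65mhz,      // VGA pixel clock
    input  wire [23:0] game_rgb,       // pixel from the Game Module
    input  wire [23:0] camera_rgb,     // pixel read back from the ZBT frame buffer
    output reg  [23:0] vga_rgb         // pixel sent to the display
);
    wire game_transparent = (game_rgb == 24'h000000);  // "no pixel" means black

    always @(posedge clk_65mhz) begin
        vga_rgb <= game_transparent ? camera_rgb : game_rgb;
    end
endmodule

Treating pure black as "transparent" is the simplest rule consistent with the description above; if black turns out to be needed inside the ninja sprites, a dedicated transparency bit per pixel would be a straightforward substitute.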

