Video Conference System
CSEE 4840 Design Document
Manish Sinha, Srikanth Vemula, William Greene
Department of Electrical Engineering
Columbia University
{ms3766,sv2271,wmg2110}@columbia.edu

1 ABSTRACT

In this document, we present the design details for our Video Conference System. We first cover the hardware architecture of the system by explaining the SRAM controller, SDRAM controller, Ethernet controller, VGA controller, and video controller. We then cover the software architecture of the system by detailing the activities of the NIOS processor. Following the hardware and software architectures, we include a brief constraint analysis section to verify our design decisions with respect to our primary system constraint: maintaining real-time video streaming. Finally, we conclude with a roadmap detailing the three upcoming milestones and how we intend to reach them.

2 INTRODUCTION

The overall system will contain the following components: two cameras with composite output, two Altera DE2 boards, two LCD monitors with a VGA interface, and one Ethernet switch. The figure below illustrates this setup:

Figure 1: System layout [1] [2] [3]

The cameras will interface with the Altera DE2 boards using the CVBS protocol [4]. The Altera DE2 boards will interface with the LCD monitors using the VGA protocol. The Altera DE2 boards will communicate with the switch using the IP protocol; the underlying transport-layer protocol will be UDP.

On the LCD monitor, the user will see a split screen: the left half of the screen will contain the video coming in from the local camera, and the right half will contain the incoming video produced by the user on the other end of the network. The decision to include the local video on the LCD screen was made to ease debugging.

3 HARDWARE ARCHITECTURE

The hardware architecture is as follows:

Figure 2: Block Diagram

3.1 SRAM CONTROLLER

The SRAM stores the video information that is received from the ADV7181 Video Decoder.
The output of the video decoder is first stored in a line buffer (not located in the SRAM). This line buffer is eventually transferred to the SRAM via the NIOS. The data received through the network is stored in another buffer, called the data buffer, which is also part of the SRAM. The SRAM controller provides the necessary address and control signals to write into or read from the two buffers in the SRAM when necessary. Each pixel needs 4 bits. Since each memory location can store 16 bits of data, we can store 4 pixels in each memory address. The two figures below detail the signal timing we will need to adopt in order to interface with the SRAM.

Figure 3: Timing considerations when SRAM is read [5].

Figure 4: Timing considerations when SRAM is written [5].

3.2 SDRAM CONTROLLER

Since the memory available on the SRAM is not sufficient to store both the buffers and the C code, we will use the SDRAM to store the C code. Using the SDRAM IP available in the SOPC Builder, we will instantiate the SDRAM controller in the NIOS system that we plan to build. The port mapping has to be done in the top-level entity. The SDRAM clock has to lead the NIOS system clock by 3 nanoseconds (a phase shift), which is ensured by a phase-locked loop component.

3.3 ETHERNET CONTROLLER

For the Ethernet controller, we plan to use the existing Verilog file provided in Lab 2 in conjunction with the SOPC Builder. A driver, written in C, is also provided in Lab 2; we will make use of it because we will use UDP as well.

3.4 VGA CONTROLLER

The VGA controller reads the data from both the video buffer and the data buffer and displays both on the VGA. We will build on the VGA controller that we used in Lab 3. The controller also provides the necessary sync signals and ensures that the video from the two buffers is displayed properly on the VGA. The display will be divided into two halves.
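As a concrete illustration of the Section 3.1 buffer layout, four 4-bit pixels share one 16-bit SRAM word. A minimal C sketch of the packing follows; the nibble ordering (pixel 0 in the low nibble) is our assumption and will be fixed when the SRAM controller is written.

```c
#include <stdint.h>

/* Pack four 4-bit pixels into one 16-bit SRAM word, as described in
 * Section 3.1 (each 16-bit memory location holds four pixels).
 * Placing pixel 0 in the low nibble is an assumption, not a fixed
 * part of the design. */
uint16_t pack_pixels(const uint8_t px[4])
{
    return (uint16_t)((px[0] & 0xF)
                    | ((px[1] & 0xF) << 4)
                    | ((px[2] & 0xF) << 8)
                    | ((px[3] & 0xF) << 12));
}

/* Recover pixel i (0..3) from a packed word. */
uint8_t unpack_pixel(uint16_t word, int i)
{
    return (uint8_t)((word >> (4 * i)) & 0xF);
}
```

Under this ordering, the pixels {1, 2, 3, 4} pack into the word 0x4321.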
One half will display the video buffer, that is, the data received from the camera; the other half will display the data received from the network.

In order to maintain a good dynamic range with respect to luminance, the brightness of each pixel should be scaled with respect to the maximum amplitude. Therefore, for every n-bit representation of a decoded camera pixel, an m-bit pixel will be provided to the VGA controller such that the maximum value of the n-bit pixel,

    2^n - 1,    (1)

will correspond to the similarly maximal m-bit pixel value,

    2^m - 1,    (2)

and all other values will be scaled in a convenient manner down to corresponding values of 0 luminance. Such an approach may be necessary to maintain good performance, though this will be subject to experimentation. We will try to use mathematical techniques such as interpolation. If the mathematical operations cannot be accelerated enough, then numerical approximations will be used.

One technique we may utilize is a "bit staggering" approach, by which we inflate a 4-bit pixel to an 8-bit pixel by alternately inserting, in order, a bit taken from the 4-bit number and a '0' bit, so that an 8-bit output value is generated. As an example, take "1101"; this corresponds to a staggered 8-bit value of "10100010".

The pseudo-code algorithm for the VGA controller is shown in Figure 5.

Figure 5: VGA controller pseudo-code.

3.5 VIDEO CONTROLLER

We will use the Analog Devices ADV7181 Video Decoder to facilitate the decoding of the composite video signal from the camera. We will sample pixel data output from the ADV7181. Using a VHDL implementation, we will tie in the appropriate pins and control signals such that a hardware-specified procedure will properly monitor the ADV7181 and buffer the output data in a manner suitable for constructing a frame buffer in SRAM. Because the ADV7181 is highly configurable, it will be necessary to describe the use of the device generally.
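Returning briefly to the bit-staggering scheme of Section 3.4, the following C function is a software model only; whether this step ultimately runs on the NIOS or in hardware is subject to the experimentation noted above.

```c
#include <stdint.h>

/* "Bit staggering" (Section 3.4): inflate a 4-bit pixel to 8 bits by
 * alternating each source bit with a '0' bit, so "1101" becomes
 * "10100010". Each source bit i is placed in the odd output slot
 * 2*i + 1; the even slots stay zero. */
uint8_t stagger4to8(uint8_t px4)
{
    uint8_t out = 0;
    for (int i = 0; i < 4; i++)
        out |= (uint8_t)(((px4 >> i) & 1) << (2 * i + 1));
    return out;
}
```

stagger4to8(0xD) yields 0xA2, matching the "1101" to "10100010" example.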
In the course of implementing the project, we will investigate particular features in further depth. A key responsibility of the video controller is to provide signals for other processes to ensure that they are properly synchronized with the data being sampled from the ADV7181.

The ADV7181 provides three pins specifically suited to processing the encoded video signal: these are the SYNC pins ([6], p. 38). The horizontal sync (HS) pin signals that a horizontal synchronization has been read, implying that the VHDL procedure should prepare to receive a burst of pixel data, referred to as active video. An active video burst, which is essentially an encoded horizontal row of pixel data, is always preceded by an SAV (start of active video) sequence and followed by an EAV (end of active video) sequence.
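As an illustration of the framing the video controller must track, the sketch below is a software model of SAV/EAV detection. It assumes the ITU-R BT.656-style timing codes that the ADV7181 family emits: the four-byte sequence FF 00 00 XY, where bit 4 of XY (the H bit) is 0 for SAV and 1 for EAV. The real monitor will be a VHDL process, not C.

```c
#include <stdint.h>

/* Classify a four-byte window of the decoder's output stream.
 * BT.656 timing codes are FF 00 00 XY; bit 4 of XY (the H bit)
 * distinguishes SAV (0) from EAV (1). Anything else is treated as
 * ordinary data. This models, in software, what the VHDL monitor
 * will do in hardware. */
typedef enum { CODE_NONE, CODE_SAV, CODE_EAV } timing_code_t;

timing_code_t scan_timing_code(const uint8_t p[4])
{
    if (p[0] == 0xFF && p[1] == 0x00 && p[2] == 0x00)
        return (p[3] & 0x10) ? CODE_EAV : CODE_SAV;
    return CODE_NONE;
}
```

Everything between an SAV code and the following EAV code is one active-video line, which is what the controller buffers toward the frame buffer in SRAM.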