tag:blogger.com,1999:blog-52937531705827105502024-02-20T05:29:03.455-08:00VLSI with MATLABVLSI and Matlab Projects for Engineering Students.http://www.blogger.com/profile/12231586279374897112noreply@blogger.comBlogger9125tag:blogger.com,1999:blog-5293753170582710550.post-75740390542382865512015-08-31T23:27:00.001-07:002015-08-31T23:27:12.104-07:00SIMULATION OF EDGE DETECTION SYSTEMS<div dir="ltr" style="text-align: left;" trbidi="on">
<b style="font-family: Arial; text-align: justify;">INTRODUCTION</b><span style="font-family: Arial; text-align: justify;">:</span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: small;"></span></span><br />
<div style="text-align: justify;">
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><br /></span></div>
<div style="font-family: Arial; text-align: justify;">
Edge detection is a fundamental tool used in most image processing applications to obtain information from the frames before feature extraction and object segmentation. It detects the outlines of an object and the boundaries between objects and the background in the image. More precisely, edge detection refers to the process of identifying and locating sharp discontinuities in intensity in an image. These discontinuities are abrupt changes in pixel intensity that characterize the boundaries of objects in a scene. Edge detection significantly reduces the amount of data in the image while preserving its most important structural features, and it is considered well suited to images corrupted with white noise. An edge is characterized by its height, slope angle, and the horizontal coordinate of the slope midpoint. An ideal edge detector should produce an edge indication localized to a single pixel located at the midpoint of the slope.</div>
<div style="font-family: Arial; text-align: justify;">
<br /></div>
<div style="font-family: Arial; text-align: justify;">
There are many ways to perform edge detection, but most methods fall into two categories: gradient and Laplacian. The basic edge detection operator is a matrix area gradient operation that determines the level of variance between different pixels: a matrix is formed centered on a chosen pixel, and if the gradient value computed over that matrix area is above a given threshold, the middle pixel is classified as an edge. Examples of gradient-based edge detectors are the Roberts, Prewitt, and Sobel operators. All gradient-based algorithms use kernel operators that calculate the strength of the slope in directions orthogonal to each other, generally horizontal and vertical.</div>
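The gradient operation described above is easy to prototype in software before committing it to hardware. The sketch below is a plain NumPy illustration of the idea, not the project's hardware design; the 3&times;3 Sobel kernels are standard, while the function name and threshold value are our own illustrative choices:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GY = GX.T

def sobel_edges(img, threshold=100):
    """Binary edge map: a pixel is an edge if the gradient magnitude
    of the 3x3 neighbourhood centred on it exceeds the threshold."""
    img = img.astype(np.int32)
    h, w = img.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(GX * window)
            gy = np.sum(GY * window)
            # |gx| + |gy| approximates sqrt(gx^2 + gy^2), the usual
            # hardware shortcut that avoids a square root.
            if abs(gx) + abs(gy) > threshold:
                edges[y, x] = True
    return edges
```

A hardware version computes the same two kernel sums in parallel per pixel; the per-pixel independence is exactly what makes the algorithm pipeline well on an FPGA.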
<div style="font-family: Arial; text-align: justify;">
<br /></div>
<div style="font-family: Arial; text-align: justify;">
The requirements that the algorithms must meet are:</div>
<div style="font-family: Arial; text-align: justify;">
a)<span class="Apple-tab-span" style="white-space: pre;"> </span>Show effectiveness and noise resistance for remote sensing images.</div>
<div style="font-family: Arial; text-align: justify;">
b)<span class="Apple-tab-span" style="white-space: pre;"> </span>Satisfy real-time constraints and minimize hardware resources in order to meet embedding requirements.</div>
<div style="font-family: Arial; text-align: justify;">
c)<span class="Apple-tab-span" style="white-space: pre;"> </span>Significantly reduce the amount of data and filter out useless information.</div>
<div style="font-family: Arial; text-align: justify;">
<br /></div>
<div style="font-family: Arial; text-align: justify;">
Classically, edge detection algorithms are implemented in software. With advances in VLSI technology, hardware implementation has become an attractive alternative. Assigning complex computation tasks to hardware and exploiting the parallelism and pipelining in the algorithms yields significant speedups in running time. Implementing image processing on reconfigurable hardware minimizes time-to-market cost, enables rapid prototyping of complex algorithms, and simplifies debugging and verification.</div>
<div style="font-family: Arial; text-align: justify;">
<br /></div>
<div style="font-family: Arial; text-align: justify;">
<b>APPLICATIONS</b>:</div>
<div style="font-family: Arial; text-align: justify;">
<br /></div>
<div style="font-family: Arial; text-align: justify;">
a)<span class="Apple-tab-span" style="white-space: pre;"> </span>Brillouin frequency shift distribution in fibre sensors based on double-technique.</div>
<div style="font-family: Arial; text-align: justify;">
b)<span class="Apple-tab-span" style="white-space: pre;"> </span>Progressive Edge Detection on multi-bit images using polynomial-based binarization.</div>
<div style="font-family: Arial; text-align: justify;">
c)<span class="Apple-tab-span" style="white-space: pre;"> </span>Application of an edge detection method to satellite images for distinguishing sea surface temperature fronts near the Japanese coast.</div>
<div style="font-family: Arial; text-align: justify;">
d)<span class="Apple-tab-span" style="white-space: pre;"> </span>An algorithm of sub-pixel edge detection based on ZOM and application in calibration for robot vision.</div>
<div style="font-family: Arial; text-align: justify;">
e)<span class="Apple-tab-span" style="white-space: pre;"> </span>Edge detection also has wider applications such as 3D reconstruction, recognition, image enhancement, image restoration, and compression.</div>
<div style="font-family: Arial; text-align: justify;">
<br /></div>
<div style="font-family: Arial; text-align: justify;">
<b>VIDEO DEMO</b></div>
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/ik-HUJckMLo&hl=en_US&fs=1&"></param>
<param name="allowFullScreen" value="true"></param>
<param name="allowscriptaccess" value="always"></param>
<embed src="http://www.youtube.com/v/ik-HUJckMLo&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object></span><br />
<div>
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><br /></span></div>
</div>
.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-46129409941602312862012-10-21T05:23:00.000-07:002013-02-26T22:36:25.632-08:00<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: center;">
<b style="text-align: left;">DESIGN AND IMPLEMENTATION OF AN FPGA-BASED REAL-TIME VERY LOW RESOLUTION FACE RECOGNITION SYSTEM</b></div>
<div style="text-align: left;">
</div>
<div style="text-align: center;">
<span style="font-weight: bold;"><br /></span></div>
<b><div style="text-align: center;">
<b>Very Low Resolution Face Recognition Problem</b></div>
</b><br />
<div style="text-align: left;">
<b><br /></b></div>
<div>
<div style="text-align: justify;">
This project addresses the very low resolution (VLR) problem in face recognition, in which the resolution of the face image to be recognized is lower than 16×16 pixels. With the increasing demand for surveillance-camera-based applications, the VLR problem arises in many face application systems.</div>
<div style="text-align: justify;">
Existing face recognition algorithms are not able to give satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, existing learning-based face SR methods do not perform well on such VLR face images. To overcome this problem, this project proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR.</div>
<div style="text-align: justify;">
Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visual quality and for face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms existing algorithms on public face databases.</div>
<div>
<br /><b></b>
<b>
<div style="text-align: justify;">
<div style="text-align: justify;">
Design files can be downloaded from the link below for study and understanding.</div>
</div>
<div style="text-align: justify;">
<div style="text-align: justify;">
<br /></div>
</div>
<div style="text-align: justify;">
<div style="display: inline !important;">
<div style="text-align: justify;">
<b><a href="https://sites.google.com/a/verilogcourseteam.com/www/ftp/DesignFiles.zip?attredirects=0&d=1">DOWNLOAD FILES </a></b><br />
<br />
DEMO VIDEO<br />
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen='allowfullscreen' webkitallowfullscreen='webkitallowfullscreen' mozallowfullscreen='mozallowfullscreen' width='320' height='266' src='https://www.youtube.com/embed/3iLtdG325-8?feature=player_embedded' frameborder='0'></iframe></div>
<br /></div>
</div>
<b>
</b></div>
<div style="text-align: justify;">
<div style="text-align: justify;">
<b></b><br /></div>
<div style="display: inline !important;">
<div style="text-align: justify;">
<b><br /></b></div>
</div>
<b>
</b></div>
</b></div>
</div>
.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-36102785418487637152010-06-01T03:46:00.000-07:002010-06-01T03:46:16.709-07:00A VLSI ARCHITECTURE FOR VISIBLE WATERMARKING IN A SECURE STILL DIGITAL CAMERA (S2DC) DESIGN (CORRECTED)<span class="Apple-style-span" style="font-family: Arial; font-size: 13px;"></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: 13px;"><div style="text-align: justify;"><b>INTRODUCTION</b></div></span><br />
<div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;">WATERMARKING is the process that embeds data called a watermark, a tag, or a label into a multimedia object such that the watermark can be detected or extracted later to make an assertion about the object. The object may be an image, audio, video, or text. Whether the host data is in the spatial domain, discrete cosine-transformed, or wavelet-transformed, watermarks of varying degrees of visibility are added to the presented media as a guarantee of authenticity, ownership, source, and copyright protection. In general, any watermarking scheme (algorithm) consists of three parts: </div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">1) watermark;</div><div style="text-align: justify;">2) encoder (insertion algorithm);</div><div style="text-align: justify;">3) decoder and comparator (verification or extraction or detection algorithm)</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Whether each owner has a unique watermark or an owner wants to use different watermarks in different objects, the marking algorithm incorporates the watermark into the object. The verification algorithm authenticates the object determining both the owner and the integrity of the object. Watermarks and watermarking techniques can be divided into various categories. The watermarks can be applied either in spatial domain or in frequency domain. It has been pointed out that the frequency-domain methods are more robust than the spatial-domain techniques. On the other hand, the spatial domain watermarking schemes have less computational overhead compared with frequency-domain schemes. According to human perception, the digital watermarks can be divided into four categories:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">1) visible; </div><div style="text-align: justify;">2) invisible-robust;</div><div style="text-align: justify;">3) invisible-fragile;</div><div style="text-align: justify;">4) dual. </div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">A visible watermark is a secondary translucent image overlaid on the primary image and is visible to a casual viewer. The invisible-robust watermark is embedded in such a way that the modifications made to the pixel values are not perceptually noticeable, and it can be recovered only with an appropriate decoding mechanism. The invisible-fragile watermark is embedded in such a way that any manipulation or modification of the image would alter or destroy the watermark. A dual watermark is a combination of a visible and an invisible watermark; in this type, an invisible watermark is used as a back-up for the visible watermark. </div><div style="text-align: justify;"><br />
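The visible-watermark case above amounts to a translucent overlay, which can be sketched as a simple alpha blend. This is a Python/NumPy illustration of the concept only, not the S2DC hardware datapath; the function name and the alpha parameter are our own:

```python
import numpy as np

def embed_visible(host, mark, alpha=0.3):
    """Overlay a translucent watermark on the host image (spatial domain).
    alpha controls watermark strength: 0 leaves the host unchanged,
    1 replaces it entirely. Illustrative sketch, not the S2DC design."""
    host = host.astype(np.float64)
    mark = mark.astype(np.float64)
    out = (1.0 - alpha) * host + alpha * mark
    # Clamp back to the 8-bit pixel range.
    return np.clip(out, 0, 255).astype(np.uint8)
```

A hardware blender would do the same weighted sum per pixel with fixed-point multipliers instead of floating point.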
</div><div style="text-align: justify;"><b>VIDEO DEMO</b><br />
<b><br />
</b></div><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/hzrXvHjqnrA&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/hzrXvHjqnrA&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br />
<br />
<div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><br />
</div></span></span></div>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-31016519738266643712010-06-01T03:38:00.000-07:002010-06-01T03:38:30.003-07:00A ROBUST UART ARCHITECTURE BASED ON RECURSIVE RUNNING SUM FILTER FOR BETTER NOISE PERFORMANCE<div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b>Introduction</b></div><div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;">Serial communication is essential to computers, allowing them to communicate with low-speed peripheral devices such as the keyboard, mouse, and modems. The Universal Asynchronous Receiver Transmitter is thus the most important component required in serial communication.</div><div style="text-align: justify;">The Universal Asynchronous Receiver/Transmitter (UART) controller is the key component of the serial communications subsystem of a computer. The UART takes bytes of data and transmits the individual bits in a sequential fashion. At the destination, a second UART re-assembles the bits into complete bytes.</div><div style="text-align: justify;"><br />
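The bit-by-bit framing and reassembly described above follows the standard 8N1 format: a start bit (0), eight data bits least-significant-bit first, and a stop bit (1). A minimal software model (Python purely for illustration; the project itself is an HDL design):

```python
def uart_frame(byte):
    """Serialize one byte as an 8N1 UART frame: start bit (0),
    eight data bits LSB first, stop bit (1). The line idles high."""
    bits = [0]                                    # start bit
    bits += [(byte >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits.append(1)                                # stop bit
    return bits

def uart_deframe(bits):
    """Reassemble a byte from a 10-bit frame, checking the framing bits."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))
```

Round-tripping a byte through `uart_deframe(uart_frame(b))` returns it unchanged, which is exactly the transmitter/receiver pairing the paragraph describes.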
</div><div style="text-align: justify;">Serial transmission is commonly used with modems and for non-networked communication between computers, terminals and other devices.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">There are two primary forms of serial transmission, synchronous and asynchronous, depending on the modes supported by the hardware.</div><div style="text-align: justify;">Some common acronyms are:</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">•<span class="Apple-tab-span" style="white-space: pre;"> </span>UART Universal Asynchronous Receiver/Transmitter</div><div style="text-align: justify;">•<span class="Apple-tab-span" style="white-space: pre;"> </span>USART Universal Synchronous-Asynchronous Receiver/Transmitter</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">The UART is used for asynchronous serial data communication between remote embedded systems. Standard UART cores use three mid-bit samples to decode each serial data bit, with the sampling rate derived from an external timer module. If the physical channel is noisy, data bits get corrupted during transmission, leading to wrong data decoding at the receiver. To overcome this noise problem, a digital low-pass-filter-based architecture is proposed. The Recursive Running Sum (RRS) is a simple low-pass filter that can be used to remove noise samples from the data samples at the receiver. The serial receive data signal is sampled directly with the system clock and the samples are fed to the RRS filter. The window size of the filter is user-programmable and determines the baud rate. </div><div style="text-align: justify;"><br />
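The RRS idea can be shown as a behavioural model: keep a running sum of the last N line samples with one add and one subtract per sample, then threshold against half the window to decide the filtered bit. This Python sketch is our own illustration of the filtering principle (the window size here is arbitrary, and the real core's startup and baud-tick logic are omitted):

```python
def rrs_decode(samples, window):
    """Recursive running sum over the last `window` 1-bit line samples.
    The sum is updated in O(1) per sample (add newest, subtract oldest),
    then compared against window/2 to produce the filtered bit."""
    s = 0
    out = []
    for i, x in enumerate(samples):
        s += x
        if i >= window:
            s -= samples[i - window]   # recursive update, no re-summing
        out.append(1 if s > window // 2 else 0)
    return out
```

A single noise spike inside a run of stable samples cannot move the sum past the majority threshold, which is how the filter rejects channel glitches.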
</div><div style="text-align: justify;"><b>VIDEO DEMO</b><br />
<b><br />
</b></div><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/wZnY_Vfk9MM&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/wZnY_Vfk9MM&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br />
<br />
<div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;"><br />
</div></span></span></div>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-47720525311449260332010-06-01T03:29:00.000-07:002010-06-01T03:29:13.493-07:00RESEARCH ON FAST SUPER-RESOLUTION IMAGE RECONSTRUCTION BASE ON IMAGE SEQUENCE<div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b>Introduction</b></div><div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;">Many applications need high-resolution video and images, such as astronomy, remote sensing, military monitoring, and medical diagnosis (CT, MRI). Imaging conditions and imaging modes constrain the ability of the usual imaging system to obtain high-resolution images, so the observed images require enlargement processing. Although a high-resolution image can be obtained directly by increasing the pixel count of the charge-coupled device, this approach applies only in very limited situations, the main reasons being high cost and physical difficulty. Traditional single-frame scaling algorithms such as nearest-neighbour and bilinear interpolation can enlarge a single image, but they provide no additional information and do not actually improve the resolution of the image. </div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Because consecutive image frames contain a large amount of similar but not identical information, a super-resolution image can be reconstructed from an image sequence. A group of low-resolution images that describe the same scene, but each of which contains different information, is used to reconstruct a high-resolution image; this technology is called super-resolution image reconstruction. In 1984, Tsai and Huang first presented super-resolution reconstruction based on translated image sequences and gave a reconstruction method based on frequency-domain approximation, resolving the problem of the non-unique solution for the super-resolution image. Since then, a variety of algorithms have been presented, including frequency-domain methods, projection onto convex sets (POCS), and maximum a posteriori (MAP) estimation. The MDSP study group presented a super-resolution reconstruction method based on the L1 norm and Bilateral Total Variation (BTV). </div><div style="text-align: justify;"><br />
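Why shifted low-resolution frames carry more information than one frame can be seen in a toy one-dimensional shift-and-add sketch. This is our own minimal illustration, not any of the algorithms above: the sub-pixel shifts are assumed known and exact, whereas a real SR method must first estimate them (and then deblur):

```python
import numpy as np

def shift_and_add(frames, factor):
    """Naive 1-D super-resolution by shift-and-add: frame k is assumed to
    be the high-resolution signal downsampled by `factor` starting at
    offset k, so interleaving the frames restores the full HR grid."""
    n = len(frames[0]) * factor
    hr = np.zeros(n)
    for k, f in enumerate(frames):
        hr[k::factor] = f   # place each LR frame on its sub-pixel grid
    return hr
```

With `factor` frames at distinct sub-pixel offsets, every HR sample is observed once, so the HR signal is recovered exactly; with a single frame, half (or more) of the samples would simply be missing.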
</div><div style="text-align: justify;"><b>VIDEO DEMO</b><br />
<b><br />
</b></div><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/n-59gRyJK4E&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/n-59gRyJK4E&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br />
<br />
<div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;"><br />
</div></span></span></div>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-63159713550531603752010-06-01T03:22:00.000-07:002010-06-01T03:22:46.195-07:00VARIABLE BLOCK SIZE MOTION ESTIMATION HARDWARE FOR VIDEO ENCODERS<div><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b>INTRODUCTION</b></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Multimedia has experienced massive growth in recent years due to improvements in algorithms and technology. An important underlying technology is video coding, whose compression efficiency and complexity have also improved significantly in recent years. Applications of video coding have moved from set-top boxes to internet delivery and mobile communications. H.264/AVC is the latest video coding standard, adopting variable block size, quarter-pixel accuracy, motion vector prediction, and multi-reference frames for motion estimation. These new features result in higher computation requirements than those of previous coding standards. In this thesis, we propose a family of motion estimation processors to balance tradeoffs between performance, area, bandwidth, and power consumption on a field-programmable gate array (FPGA) platform, using a combination of algorithmic and arithmetic optimizations for motion estimation. At the algorithmic level, we compare different algorithms and analyze their complexities. At the arithmetic level, we explore bit-parallel and bit-serial designs, which employ non-redundant and redundant number systems. In our bit-serial design, we study tradeoffs between least significant bit first (LSB-first) and most significant bit first (MSB-first) modes. </div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">Finally, we offer a library of motion estimation processors to suit different applications. For bit-parallel processors, we offer one-dimensional and two-dimensional systolic-array-based architectures. Together with tree architectures and our proposed bit-serial architecture, our family of processors is able to cover a range of applications. The bit-serial processor is able to support full search, three-step search, and diamond search. An early-termination scheme has been introduced to further shorten the encoding time, and the standard technique is further optimized via H.264/AVC motion vector prediction.</div><div style="text-align: justify;"><br />
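The full search mentioned above exhaustively tests every candidate motion vector in a search range and keeps the one with the lowest sum of absolute differences (SAD), the cost metric these processors compute in hardware. A reference-model sketch (Python for illustration; block size and search range here are arbitrary choices):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-size blocks."""
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def full_search(ref, cur, bx, by, bsize, srange):
    """Exhaustive block matching: find the motion vector (dx, dy) within
    +/- srange that minimises the SAD between the current block at
    (bx, by) and the corresponding block in the reference frame."""
    block = cur[by:by + bsize, bx:bx + bsize]
    best = (None, float("inf"))
    h, w = ref.shape
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            x, y = bx + dx, by + dy
            if 0 <= x and 0 <= y and x + bsize <= w and y + bsize <= h:
                cost = sad(block, ref[y:y + bsize, x:x + bsize])
                if cost < best[1]:
                    best = ((dx, dy), cost)
    return best
```

Three-step and diamond search visit only a subset of these candidates, trading a small quality loss for a large reduction in SAD computations; the hardware tradeoffs in this project are about how many of these SADs can be evaluated in parallel.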
</div><div style="text-align: justify;"><b>VIDEO DEMO</b><br />
<b><br />
</b></div><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/h-XnM0RBfko&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/h-XnM0RBfko&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br />
<br />
<div style="text-align: justify;"><br />
</div></span></span></div>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-56652931538128553782010-06-01T03:16:00.000-07:002010-06-01T03:16:03.430-07:00FPGA-BASED FACE DETECTION SYSTEM USING HAAR CLASSIFIERS<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b>INTRODUCTION</b></div><div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;">Face detection in image sequences has been an active research area in the computer vision field in recent years due to its potential applications, such as monitoring and surveillance, human-computer interfaces, smart rooms, intelligent robots, and biomedical image analysis. Face detection means identifying and locating a human face in images regardless of size, position, and condition. Numerous approaches have been proposed for face detection in images. Simple features such as color, motion, and texture were used for face detection in early research; however, these methods break down easily because of the complexity of the real world. The face detection framework proposed by Viola and Jones, based on the AdaBoost learning algorithm using Haar features, is the most popular among the statistically based approaches and achieves rapid and robust face detection. However, the detector requires considerable computation power because many Haar feature classifiers check all pixels in the images. Although real-time face detection is possible using high-performance computers, the resources of the system tend to be monopolized by face detection. This constitutes a bottleneck to the application of face detection in real time.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;"><b>FACE DETECTION ALGORITHM</b></div><div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;">The face detection algorithm proposed by Viola and Jones is used as the basis of our design. The face detection algorithm looks for specific Haar features of a human face. When one of these features is found, the algorithm allows the face candidate to pass to the next stage of detection. A face candidate is a rectangular section of the original image called a subwindow. Generally these sub-windows have a fixed size (typically 24×24 pixels). This sub-window is often scaled in order to obtain a variety of different size faces. The algorithm scans the entire image with this window and denotes each respective section a face candidate. The algorithm uses an integral image in order to process Haar features of a face candidate in constant time. It uses a cascade of stages which is used to eliminate non-face candidates quickly. Each stage consists of many different Haar features. Each feature is classified by a Haar feature classifier. The Haar feature classifiers generate an output which can then be provided to the stage comparator. The stage comparator sums the outputs of the Haar feature classifiers and compares this value with a stage threshold to determine if the stage should be passed. If all stages are passed the face candidate is concluded to be a face. </div><div style="text-align: justify;"><br />
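The constant-time property of the integral image mentioned above comes from the fact that any rectangle sum reduces to at most four table lookups. A small NumPy sketch of this standard construction (illustration only; the FPGA pipeline computes the same quantities with adders and line buffers):

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of all pixels above and to the left of (y, x),
    inclusive. Built with cumulative sums along both axes."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y), from at
    most 4 lookups regardless of rectangle size - the property that lets
    a Haar feature classifier evaluate in constant time."""
    s = ii[y + h - 1, x + w - 1]
    if x > 0:
        s -= ii[y + h - 1, x - 1]
    if y > 0:
        s -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        s += ii[y - 1, x - 1]
    return int(s)
```

A two-rectangle Haar feature is then just the difference of two `rect_sum` calls, and each cascade stage sums many such feature responses before comparing against its stage threshold.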
</div><div style="text-align: justify;"><b>VIDEO DEMO</b></div></span></span><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/e_dWDx41nSk&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/e_dWDx41nSk&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br />
</span></span>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-63340532154484422412010-06-01T03:09:00.000-07:002010-06-01T03:09:50.953-07:00DCT-BASED IMAGEWATERMARKING USING SUBSAMPLING<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b>INTRODUCTION</b></div><div style="text-align: justify;"><b><br />
</b></div><div style="text-align: justify;">WATERMARKING is the process of inserting hidden information into an image by modifying its pixels with minimum perceptual disturbance. In one well-known early approach, a fixed number of the highest-magnitude DCT coefficients are randomly perturbed, so that the watermark is placed in the perceptually significant components of the image. Although the method is quite robust against signal manipulations, the original image must be present for watermark recovery. Recently, the pursuit of schemes that do not need the original image during watermark recovery has become a topic of intense research. This is partly due to practical issues: the recovery process is simpler without a comparison against the original image, and in many instances release of the original material for any purpose is undesirable or prohibited. Piva et al. developed a DCT-based scheme in which the watermark is identified by computing the correlation between the watermark sequence and the DCT coefficients of the watermarked image. Dugad et al. proposed a similar scheme in the DWT domain. In these methods, a significant correlation can be obtained only by using a large number of coefficients (typically more than 10,000). Fridrich developed a method in which a binary-valued watermark is inserted into the low-frequency region based on a mapping function and a spread-spectrum signal is added to the mid-frequency region.</div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">This paper describes a DCT-based image watermarking algorithm in which the original image is not required for watermark recovery; this is achieved by inserting the watermark into subimages obtained through subsampling.</div><div style="text-align: justify;"><br />
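To make the correlation-style detection described above concrete, here is a simplified Piva-style sketch: multiplicative embedding into the largest-magnitude DCT coefficients and blind detection by correlation. This is an illustration only, not the exact subsampling algorithm of this paper; `embed`, `detect`, the coefficient-selection rule, and the strength `alpha` are all our assumptions, and the image is assumed square.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis, so C @ img @ C.T is the 2-D DCT."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def embed(img, w, alpha=0.1):
    """Multiplicatively embed a +/-1 sequence w into large DCT coefficients."""
    n = img.shape[0]                     # assumes a square n x n image
    C = dct_matrix(n)
    X = (C @ img @ C.T).ravel()
    # take the largest-magnitude coefficients, skipping the biggest (usually DC)
    idx = np.argsort(-np.abs(X))[1:1 + w.size]
    X[idx] += alpha * np.abs(X[idx]) * w
    return C.T @ X.reshape(n, n) @ C, idx

def detect(img, w, idx):
    """Blind detection: correlate w with the selected DCT coefficients."""
    n = img.shape[0]
    C = dct_matrix(n)
    X = (C @ img @ C.T).ravel()
    return float(np.mean(X[idx] * w))
```

The detector never sees the original image: the embedded term contributes a positive bias `alpha * mean(|X[idx]|)` to the correlation for the correct watermark, while a wrong watermark correlates near zero, which is why such schemes need many coefficients to get a reliable margin.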
<b>VIDEO DEMO</b><br />
<b><br />
</b></div></span></span><span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/F0pvvEvY38c&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/F0pvvEvY38c&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object><br />
</span></span>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0tag:blogger.com,1999:blog-5293753170582710550.post-35274371377846735112010-06-01T03:03:00.000-07:002010-06-01T03:03:31.215-07:00VLSI IMPLEMENTATION OF AN EDGE-ORIENTED IMAGE SCALING PROCESSOR<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"></span></span><br />
<span class="Apple-style-span" style="font-family: Arial; font-size: small;"><span class="Apple-style-span" style="font-size: 13px;"><div style="text-align: justify;"><b>INTRODUCTION</b></div><div style="text-align: justify;"><br />
</div><div style="text-align: justify;">IMAGE scaling is widely used in many fields, ranging from consumer electronics to medical imaging. It is indispensable when the resolution of an image generated by a source device differs from the screen resolution of the target display. For example, images must be enlarged to fit an HDTV screen or scaled down to fit a mini-size portable LCD panel. The simplest and most widely used scaling methods are the nearest-neighbor and bilinear techniques. In recent years, many efficient scaling methods have been proposed in the literature. According to the required computation and memory space, the existing methods can be divided into two classes: lower-complexity and higher-complexity scaling techniques. The complexity of the former is very low and comparable to the conventional bilinear method; the latter yields visually pleasing images by utilizing more advanced scaling algorithms. In many practical real-time applications, the scaling process is included in end-user equipment, so a good lower-complexity scaling technique that is simple and suitable for low-cost VLSI implementation is needed.</div><div style="text-align: justify;"><br />
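The bilinear baseline mentioned above computes each output pixel as a distance-weighted average of its four nearest source pixels. A rough NumPy sketch (the half-pixel coordinate convention used here is one common choice, not something mandated by the paper):

```python
import numpy as np

def bilinear_scale(src, out_h, out_w):
    """Scale a grayscale image with bilinear interpolation."""
    in_h, in_w = src.shape
    # map output pixel centers to source coordinates (half-pixel convention)
    ys = np.clip((np.arange(out_h) + 0.5) * in_h / out_h - 0.5, 0, in_h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * in_w / out_w - 0.5, 0, in_w - 1)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]              # vertical interpolation weights
    wx = (xs - x0)[None, :]              # horizontal interpolation weights
    tl = src[np.ix_(y0, x0)]; tr = src[np.ix_(y0, x1)]
    bl = src[np.ix_(y1, x0)]; br = src[np.ix_(y1, x1)]
    top = tl * (1 - wx) + tr * wx        # blend horizontally, then vertically
    bot = bl * (1 - wx) + br * wx
    return top * (1 - wy) + bot * wy
```

Each output pixel depends on at most four neighbors and two multiplies per axis, which is why bilinear serves as the complexity reference point for the lower-complexity class.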
</div><div style="text-align: justify;">Kim et al. presented a simple area-pixel scaling method. It uses an area-pixel model instead of the common point-pixel model and takes a maximum of four pixels of the original image to calculate one pixel of the scaled image. By using the area coverage of the source pixels under the applied mask in combination with the difference of luminosity among those pixels, Andreadis et al. proposed a modified area-pixel scaling algorithm, and its circuit, to obtain better edge preservation. These methods preserve edges better but require roughly twice the computation of the bilinear method. To achieve the goal of lower cost, we present an edge-oriented area-pixel scaling processor in this paper. The area-pixel scaling technique is approximated and implemented with a proper, low-cost VLSI circuit in our design. The proposed scaling processor supports floating-point magnification factors and preserves edge features efficiently by taking into account the local characteristics of the available source pixels around the target pixel. Furthermore, it handles streaming data directly and requires only a small amount of memory: one line buffer rather than a full frame buffer. The experimental results demonstrate that the proposed design outperforms other lower-complexity image scaling methods in terms of both quantitative evaluation and visual quality. The seven-stage VLSI architecture for the proposed design was implemented and synthesized using Verilog HDL.</div><div style="text-align: justify;"><br />
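The plain area-coverage part of the area-pixel model can be sketched as follows: each target pixel corresponds to a rectangular footprint in source coordinates, and the output value is the average of the source pixels weighted by their overlap area with that footprint. This illustrative Python version omits the luminosity-difference and edge-oriented weighting of the actual designs, and the function name is ours:

```python
import numpy as np

def area_pixel_scale(src, out_h, out_w):
    """Area-pixel resampling: weight each source pixel by its overlap area
    with the target pixel's footprint in source space."""
    in_h, in_w = src.shape
    sy, sx = in_h / out_h, in_w / out_w      # footprint size of one target pixel
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            y0, y1 = i * sy, (i + 1) * sy    # footprint in source coordinates
            x0, x1 = j * sx, (j + 1) * sx
            acc = 0.0
            for r in range(int(y0), min(int(np.ceil(y1)), in_h)):
                for c in range(int(x0), min(int(np.ceil(x1)), in_w)):
                    # overlap of source pixel [r, r+1) x [c, c+1) with footprint
                    oy = min(r + 1, y1) - max(r, y0)
                    ox = min(c + 1, x1) - max(c, x0)
                    acc += oy * ox * src[r, c]
            out[i, j] = acc / (sy * sx)      # normalize by footprint area
    return out
```

For moderate magnification factors the footprint overlaps at most four source pixels, matching the four-pixel maximum of Kim et al.'s model; hardware versions replace the division by precomputed weights so the datapath stays multiply-accumulate only.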
<br />
<b>VIDEO DEMO</b><br />
<br />
<object height="344" width="425"><param name="movie" value="http://www.youtube.com/v/PnPoPmiNrqg&hl=en_US&fs=1&"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/PnPoPmiNrqg&hl=en_US&fs=1&" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="425" height="344"></embed></object></div></span></span>.http://www.blogger.com/profile/12231586279374897112noreply@blogger.com0