Wednesday, July 29, 2009

New Robot With Artificial Skin To Improve Human Communication

Work is beginning on a robot with artificial skin, being developed as part of a project involving researchers at the University of Hertfordshire, so that it can be used in their work investigating how robots can help children with autism learn about social interaction.

(Image: Kaspar. Credit: University of Hertfordshire)

Professor Kerstin Dautenhahn and her team at the University’s School of Computer Science are part of a European consortium, which is working on the three-year Roboskin project to develop a robot with skin and embedded tactile sensors.

According to the researchers, this is the first time that this approach has been used in work with children with autism.

The researchers will work on Kaspar (http://kaspar.feis.herts.ac.uk/), a child-sized humanoid robot developed by the Adaptive Systems research group at the University. The robot is currently being used by Dr Ben Robins and his colleagues to encourage social interaction skills in children with autism. They will cover Kaspar with robotic skin, and Dr Daniel Polani will develop new sensor technologies which can provide tactile feedback from areas of the robot’s body. The goal is to enable the robot to respond to the different ways children play with Kaspar, in order to help them develop ‘socially appropriate’ playful interaction (e.g. not too aggressive) with the robot and with other people.

“Children with autism have problems with touch, often with either touching or being touched,” said Professor Dautenhahn. “The idea is to put skin on the robot, as touch is a very important part of social development and communication. The tactile sensors will allow the robot to detect different types of touch, and it can then encourage or discourage different approaches.”
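
Purely as an illustration of the kind of logic described above, here is a minimal Python sketch of classifying a touch reading and choosing an encouraging or discouraging response. The pressure scale, thresholds and responses are invented for illustration and are not taken from the Roboskin project.

```python
# Hypothetical sketch: classify a tactile sensor reading and choose a response.
# The pressure scale, thresholds and response messages are invented, not Roboskin code.

def classify_touch(pressure: float) -> str:
    """Map a normalised pressure reading (0.0-1.0) to a touch style."""
    if pressure < 0.3:
        return "gentle"
    elif pressure < 0.7:
        return "firm"
    return "aggressive"

def respond(style: str) -> str:
    """Encourage gentle play, discourage aggressive play."""
    responses = {
        "gentle": "smile and keep playing",        # positive feedback
        "firm": "neutral acknowledgement",         # no strong feedback
        "aggressive": "turn away and say 'ouch'",  # discouraging feedback
    }
    return responses[style]

if __name__ == "__main__":
    for reading in (0.1, 0.5, 0.9):
        style = classify_touch(reading)
        print(f"pressure={reading:.1f} -> {style}: {respond(style)}")
```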

Roboskin is being co-ordinated by Professor Giorgio Cannata of Università di Genova (Italy). Other partners in the consortium are: Università di Genova, Ecole Polytechnique Federale Lausanne, Italian Institute of Technology, University of Wales at Newport and Università di Cagliari.



Tuesday, July 28, 2009

LTE Technology

LTE (Long Term Evolution) is the latest step toward the 4th generation of radio technologies designed to increase the capacity and speed of mobile telephone networks. Where the current generation of mobile telecommunication networks is collectively known as 3G, LTE is marketed and referred to as 4G, although technically it is 3.9G. Most major mobile carriers in the United States and several worldwide carriers have announced plans to convert their networks to LTE beginning in 2009. LTE is a set of enhancements to the Universal Mobile Telecommunications System (UMTS) which will be introduced in 3rd Generation Partnership Project (3GPP) Release 8. Much of 3GPP Release 8 focuses on adopting 4G mobile communications technology, including an all-IP flat networking architecture.

LTE Advanced is a mobile communication standard currently being standardized by the 3rd Generation Partnership Project (3GPP) as a major enhancement of 3GPP Long Term Evolution. LTE (Long Term Evolution) standardization has now reached a mature state, where changes to the specification are limited to corrections and bug fixes. LTE mobile communication systems are expected to be deployed from 2010 onwards as a natural evolution of the Global System for Mobile Communications (GSM) and the Universal Mobile Telecommunications System (UMTS).

Presentation Links

Click here to download Motorola White Paper (Presentation Paper with Abstract) on LTE

Click on the link to download ppt shot (1) and ppt shot (2)

Wikipedia Links Click Here (LTE 3GPP)
Click Here (LTE Advanced)

Sunday, July 26, 2009

Google Wave



Google Wave is "a personal communication and collaboration tool" announced by Google at the Google I/O conference on May 27, 2009. It is a web-based service, computing platform, and communications protocol designed to merge e-mail, instant messaging, wikis, and social networking. It has a strong collaborative and real-time focus, supported by robust spelling/grammar checking, automated translation between 40 languages, and numerous other extensions. It was announced on Google's official blog on 20 July 2009 that the preview of Google Wave will be extended to about 100,000 users on September 30, 2009. At that point, the current view of the Google Wave website (which is a preview page with the keynote embedded) will be replaced with Wave itself.

Presentation Links

Presentation Video Link for Developers Click Here

Wikipedia Link Click Here


Saturday, July 25, 2009

aVsSEMINARs Launched

Hi Friends,

Our new blog platform dedicated to seminar topics has just been launched. You can get all the topics developed over the last 5-7 years at www.avsseminar.blogspot.com. All of the topics include presentations as well. However, the latest posts on aVsonline are not available on aVsseminar, so for the latest news and technical topics log on to aVsonline.

Thursday, July 23, 2009

Eye-Fi adds Wi-Fi to any Digital Camera!


First your phone went wireless, then your laptop, now finally your camera!
Never scrounge around for a USB cable again! Eye-fi is a magical orange SD memory card that will not only store 2GB worth of pictures, it'll upload them to your computer, and to Flickr, Facebook, Picasa (or 14 others) wirelessly, invisibly, automatically!
This little guy looks like a normal 2GB memory card and works with nearly any camera that takes SD memory. There are no antennas, no protrusions, no subscription fees, and no cables.
Here's how it works: You set up the card once with the included USB card reader (tell it which wireless network it should use, and type in the password if you have one), choose the photo sharing service of your choice (you have plenty of options), then slip the card in your camera.
From then on, you never have to touch anything. Just take photos. Whenever your camera is near the wireless network you selected and is idle, Eye-Fi will upload all your photos (JPEGs only) to your online photo sharing service. Next time your computer's online, they'll download there, too!
Yes, it is practically magic.
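
As a rough, hypothetical sketch of that workflow (not Eye-Fi's actual firmware or API), here is a small Python loop that watches a folder and hands any new JPEGs to an upload step once; the folder path and the upload_photo() function are placeholders.

```python
# Illustrative sketch only: watch a folder and "upload" new JPEGs once.
# WATCH_DIR and upload_photo() are placeholders, not Eye-Fi's real mechanism.
import os
import time

WATCH_DIR = "/media/camera/DCIM"   # hypothetical mount point
uploaded = set()                   # remember what has already been sent

def upload_photo(path: str) -> None:
    # Placeholder for "send to Flickr/Facebook/Picasa/your PC".
    print(f"uploading {path} ...")

def scan_once() -> None:
    for name in os.listdir(WATCH_DIR):
        if name.lower().endswith((".jpg", ".jpeg")) and name not in uploaded:
            upload_photo(os.path.join(WATCH_DIR, name))
            uploaded.add(name)

if __name__ == "__main__":
    while True:   # the real card only does this when the camera is idle on your network
        scan_once()
        time.sleep(30)
```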
Eye-Fi SD card comes in three varieties:
1). Eye-Fi Home ($79.99)
2). Eye-Fi Share ($99.99)
3). Eye-Fi Explore ($129.99)

Trend links

Wikipedia Link Click Here

The Morph concept - NOKIA

Morph Wrist mode

Launched alongside The Museum of Modern Art “Design and The Elastic Mind” exhibition, the Morph concept device is a bridge between highly advanced technologies and their potential benefits to end-users. This device concept showcases some revolutionary leaps being explored by Nokia Research Center (NRC) in collaboration with the Cambridge Nanoscience Centre (United Kingdom) – nanoscale technologies that will potentially create a world of radically different devices that open up an entirely new spectrum of possibilities.

Morph concept technologies might create fantastic opportunities for mobile devices:

  • Newly-enabled flexible and transparent materials blend more seamlessly with the way we live
  • Devices become self-cleaning and self-preserving
  • Transparent electronics offering an entirely new aesthetic dimension
  • Built-in solar absorption might charge a device, whilst batteries become smaller, longer lasting and faster to charge
  • Integrated sensors might allow us to learn more about the environment around us, empowering us to make better choices
In addition to the advances above, the integrated electronics shown in the Morph concept could cost less and include more functionality in a much smaller space, even as interfaces are simplified and usability is enhanced. All of these new capabilities will unleash new applications and services that will allow us to communicate and interact in unprecedented ways.

Flexible & Changing Design

Nanotechnology enables materials and components that are flexible, stretchable, transparent and remarkably strong. Fibril proteins are woven into a three dimensional mesh that reinforces thin elastic structures. Using the same principle behind spider silk, this elasticity enables the device to literally change shapes and configure itself to adapt to the task at hand.

A folded design would fit easily in a pocket and could lend itself ergonomically to being used as a traditional handset. An unfolded larger design could display more detailed information, and incorporate input devices such as keyboards and touch pads.

Even integrated electronics, from interconnects to sensors, could share these flexible properties. Further, utilization of biodegradable materials might make production and recycling of devices easier and ecologically friendly.

Self-Cleaning

Nanotechnology can also be leveraged to create self-cleaning surfaces on mobile devices, ultimately reducing corrosion and wear and improving longevity. Nanostructured surfaces, such as “nanoflowers”, naturally repel water, dirt, and even fingerprints, utilizing effects also seen in natural systems.

Advanced Power Sources

Nanotechnology holds out the possibility that the surface of a device will become a natural source of energy via a covering of “Nanograss” structures that harvest solar power. At the same time new high energy density storage materials allow batteries to become smaller and thinner, while also quicker to recharge and able to endure more charging cycles.

Sensing The Environment

Nanosensors would empower users to examine the environment around them in completely new ways, from analyzing air pollution, to gaining insight into bio-chemical traces and processes. New capabilities might be as complex as helping us monitor evolving conditions in the quality of our surroundings, or as simple as knowing if the fruit we are about to enjoy should be washed before we eat it. Our ability to tune into our environment in these ways can help us make key decisions that guide our daily actions and ultimately can enhance our health.

Links for Nokia morph

Click here to view the video (.mov file 46mb size)

For more Pictures n Views Click here

Click here for Nokia Press Release

Wikipedia Link Click here

Sunday, July 19, 2009

Ambient Intelligence

Defined by the EC Information Society Technologies Advisory Group (ISTAG) in a vision of the Information Society, Ambient Intelligence emphasises greater user-friendliness, more efficient service support, user empowerment, and support for human interactions. In this vision, people will be surrounded by intelligent and intuitive interfaces embedded in the everyday objects around us, and an environment that recognises and responds to the presence of individuals in an invisible way, by the year 2010.
Ambient Intelligence builds on three recent key technologies: Ubiquitous Computing, Ubiquitous Communication and Intelligent User Interfaces – some of these concepts are barely a decade old and this reflects on the focus of current implementations of AmI (more on this later on). Ubiquitous Computing means integration of microprocessors into everyday objects like furniture, clothing, white goods, toys, even paint. Ubiquitous Communication enables these objects to communicate with each other and the user by means of ad-hoc and wireless networking. An Intelligent User Interface enables the inhabitants of the AmI environment to control and interact with the environment in a natural (voice, gestures) and personalised way (preferences, context).

Making AmI real is no easy task: as commonly happens with a new technology, soon after the high-flying visions we are shown the first pieces of hardware for the intelligent environment. However, making a door knob able to compute and communicate does not make it intelligent: the key (and the challenge) to really adding wit to the environment lies in how the system learns and keeps up to date with the needs of the user by itself. A thinking machine, you might conclude – not quite, but close: if you rely on the intelligent environment you expect it to operate correctly every time, without tedious training, updates and management. You might be willing to do that once, but not constantly, even when the objects, inhabitants or preferences in the environment change frequently. A learning machine, I'll say.

The following articles in this special theme issue showcase the various aspects of AmI research in Europe. In addition to background information on AmI-related activities within the ERCIM members, we have a number of articles on the infrastructure for AmI environments, followed by articles on algorithms adding some of the intelligence required to reach our goal for 2010.

Presentation Links

Click here to download PowerPoint presentation

Wikipedia Link Click here

ISTAG; Scenarios for Ambient Intelligence in 2010; Final Report,
Click here

Saturday, July 18, 2009

Moonlight (runtime) Plugin

Moonlight is an open source implementation of the Silverlight browser plug-in, based on Mono (an open source implementation of .NET).

Moonlight is being developed by Novell, in cooperation with Microsoft, to:

  • allow Silverlight applications to run on Linux
  • offer a Linux SDK (software development kit) for Silverlight applications
  • use the existing Silverlight engine to develop desktop applications.

Like Silverlight, Moonlight manifests as a runtime environment for browser-based rich Internet applications (RIAs) and similarly provides animation, video playback and vector graphics capabilities. Developers are also creating desktop widgets called "desklets" to extend Moonlight applications beyond the browser.

Presentation links

Click here to download Power Point Presentation

Wikipedia Link Click here

Storage Area Network

The Storage Network Industry Association (SNIA) defines the SAN as a network whose primary purpose is the transfer of data between computer systems and storage elements. A SAN consists of a communication infrastructure, which provides physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust. The term SAN is usually (but not necessarily) identified with block I/O services rather than file access services. A SAN can also be a storage system consisting of storage elements, storage devices, computer systems, and/or appliances, plus all control software, communicating over a network.

A SAN allows “any-to-any” connection across the network, using interconnect elements such as routers, gateways, hubs, switches and directors. It eliminates the traditional dedicated connection between a server and storage, and the concept that the server effectively “owns and manages” the storage devices. It also eliminates any restriction on the amount of data that a server can access, currently limited by the number of storage devices attached to the individual server. Instead, a SAN introduces the flexibility of networking to enable one server or many heterogeneous servers to share a common storage utility, which may comprise many storage devices, including disk, tape, and optical storage. Additionally, the storage utility may be located far from the servers that use it.

The SAN can be viewed as an extension of the storage bus concept, which enables storage devices and servers to be interconnected using elements similar to those in local area networks (LANs) and wide area networks (WANs): routers, hubs, switches, directors, and gateways. A SAN can be shared between servers and/or dedicated to one server. It can be local, or can be extended over geographical distances.

Presentation Links

Click Here to download the Power point presentation

Wikipedia Link click here

SPINS - Security Protocols for Sensor Networks

As sensor networks edge closer towards wide-spread deployment, security issues become a central concern. Sensor networks have been identified as being useful in a variety of domains, including the battlefield and perimeter defense. So far, much research has focused on making sensor networks feasible and useful, and has not concentrated on security.

We present a suite of security building blocks optimized for resource-constrained environments and wireless communication. SPINS has two secure building blocks: SNEP and μTESLA. SNEP provides the following important baseline security primitives: data confidentiality, two-party data authentication, and data freshness.

A particularly hard problem is to provide efficient broadcast authentication, which is an important mechanism for sensor networks. μTESLA is a new protocol which provides authenticated broadcast for severely resource-constrained environments. We implemented the above protocols, and show that they are practical even on minimal hardware: the performance of the protocol suite easily matches the data rate of our network. Additionally, we demonstrate that the suite can be used for building higher level protocols.
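
The broadcast-authentication trick in μTESLA rests on a one-way key chain: keys are generated by repeated hashing and later disclosed in reverse order, so a receiver holding an earlier, already-trusted key can verify a disclosed key by re-hashing it. The sketch below shows only that key-chain idea, not the full SPINS protocol (no MACs, time intervals or loose time synchronisation).

```python
# Minimal sketch of the one-way key chain behind uTESLA-style broadcast authentication.
# Generation: K[n] is random; K[i] = H(K[i+1]). The commitment K[0] is given to receivers.
# Disclosure: the sender later reveals K[1], K[2], ...; a receiver verifies a disclosed key
# by hashing it back to one it already trusts. Real uTESLA also MACs packets and delays disclosure.
import hashlib
import os

def H(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

def make_chain(n: int) -> list[bytes]:
    """Return [K0, K1, ..., Kn] with K[i] = H(K[i+1])."""
    chain = [os.urandom(32)]          # K[n], the chain seed
    for _ in range(n):
        chain.append(H(chain[-1]))
    chain.reverse()                   # chain[0] = K0 (commitment) ... chain[n] = Kn
    return chain

def verify(disclosed: bytes, trusted: bytes, max_steps: int) -> bool:
    """Check that hashing `disclosed` at most max_steps times reaches `trusted`."""
    k = disclosed
    for _ in range(max_steps):
        k = H(k)
        if k == trusted:
            return True
    return False

if __name__ == "__main__":
    chain = make_chain(10)
    commitment = chain[0]
    assert verify(chain[3], commitment, max_steps=10)      # a genuine key verifies
    assert not verify(os.urandom(32), commitment, 10)      # a forged key does not
```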

Presentation Links

Click here to download The power point presentation

For pdf infofile Click here

Wednesday, July 15, 2009

Cloud Computing

What is cloud computing?

Today, a growing number of businesses rely on the delivery of IT infrastructure and applications over the Internet (or “the cloud”) to cost-effectively provide various IT applications. Couple that with advancements in virtualization technology, expanding bandwidth and the need to cut costs — and you can sense a fundamental shift in the way many businesses approach IT software and hardware investments.

Intel defines cloud computing as a computing model where services and data reside in shared resources in scalable data centers. Any authenticated device over the Internet can access those services and data.

The cloud has three components:

1. Cloud architecture: Services and data reside in shared, dynamically scalable resource pools, based on virtualization technologies and/or scalable application environments.
2. Cloud service: The service delivered to enterprises over the Internet sits on cloud architecture and scales without user intervention. Companies typically bill monthly for service based on usage.
3. Private cloud: Cloud architecture is deployed behind an organization’s firewall for internal use as IT-as-a-service.

Presentation Links

Click here to download Microsoft's PowerPoint presentation

For more Click here to download the pdf document from Intel

Wikipedia link Click here

Tuesday, July 14, 2009

Microsoft Office 2010 is Coming

Microsoft Office 2010, codenamed Office 14, is the successor of Microsoft Office 2007, a productivity suite for Microsoft Windows. Extended file compatibility, user interface updates, and a refined user experience are planned for Office 2010. A 64-bit version of Office 2010 will be available. It will be available for Windows XP SP3, Windows Vista and Windows 7. Microsoft plans to release Office 2010 in the second half of 2010.

Microsoft Office 2010, as revealed by the just-released Technical Preview, brings a set of important if incremental improvements to the market-leading office suite. Among them: making the Ribbon the default interface for all Office applications, adding a host of new features to individual applications (such as video editing in PowerPoint and improved mail handling in Outlook), and introducing a number of Office-wide productivity enhancers, including photo editing tools and a much-improved paste operation.

Missing from the Technical Preview is what will be the most important change to Office in years -- a Web-based version for both enterprises and consumers. Also missing from the preview is access to Office for mobile phones and other mobile clients. Those features will be introduced in later versions of the software; the final version is expected to ship in the first half of 2010.

Technical Review Link

Click here to download the technical link for ms office 2010

Wikipedia link Click here

Monday, July 13, 2009

IEEE 802.11n –Next Generation Wireless Standard

The newest standard in wireless LAN is called 802.11n. 802.11 is an industry standard for high-speed wireless networking, and 802.11n is designed to replace the 802.11a, 802.11b and 802.11g standards. 802.11n equipment is backward compatible with the older 802.11a/b/g standards, and it supports much faster wireless connections over longer distances. So-called “Wireless N” or “Draft N” routers available today are based on a preliminary (draft) version of 802.11n, and draft versions of the standard are already used in laptops and routers. 802.11n works by utilizing multiple-input multiple-output (MIMO) antennas and channel bonding in tandem to transmit and receive data, using at least two antennas for transmission. 802.11n will support bandwidth greater than 100 Mbps, and in theory it can reach a speed of 600 Mbps. It can be used for high-speed Internet access, VoIP, Network Attached Storage (NAS) and gaming. The full version of the standard will be implemented in laptops and LANs in the coming years.
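
The 600 Mbps theoretical peak follows directly from the PHY parameters: a 40 MHz channel carries 108 data subcarriers, 64-QAM with rate-5/6 coding yields 540 data bits per OFDM symbol per stream, the short guard interval gives a 3.6 µs symbol, and four spatial streams multiply the resulting 150 Mbps per stream. A quick check of that arithmetic (standard 802.11n figures, not taken from the post above):

```python
# Back-of-envelope check of the 802.11n top rate (40 MHz, short guard interval, 4 streams).
data_subcarriers = 108      # per 40 MHz channel
bits_per_subcarrier = 6     # 64-QAM
coding_rate = 5 / 6         # convolutional code rate
symbol_time = 3.6e-6        # seconds per OFDM symbol with short guard interval
spatial_streams = 4

bits_per_symbol = data_subcarriers * bits_per_subcarrier * coding_rate   # 540 bits/stream
rate_per_stream = bits_per_symbol / symbol_time                          # 150 Mbps
total = rate_per_stream * spatial_streams
print(f"{rate_per_stream/1e6:.0f} Mbps per stream, {total/1e6:.0f} Mbps total")  # 150, 600
```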


Presentation Links

Please Click here to download the Microsoft's Powerpoint presentation

Wikipedia Link Click here

Inferno OS

Inferno is an operating system for creating and supporting distributed services. The name of the operating system and of its associated programs, as well as of the company Vita Nuova Holdings that produces it, were inspired by the literary works of Dante Alighieri, particularly the Divine Comedy.
Inferno runs in hosted mode under several different operating systems or natively on a range of hardware architectures. In each configuration the operating system presents the same standard interfaces to its applications. A communications protocol called Styx is applied uniformly to access both local and remote resources.
Applications are written in the type-safe Limbo programming language, whose binary representation is identical over all platforms.

Applications use Styx by calling standard file operations: open, read, write, and close. As of the fourth edition of Inferno, Styx is identical to Plan 9's newer version of its hallmark 9P protocol, 9P2000.



Presentation Links

Click here to download the ppt presentation for Inferno OS

Wikipedia link is here please Click here

Bayesian network

A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.

Formally, Bayesian networks are directed acyclic graphs whose nodes represent variables, and whose missing edges encode conditional independencies between the variables. Nodes represent random variables, but in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
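
As a tiny worked example of the disease-symptom case above, the sketch below encodes a two-node network (Disease -> Symptom) with made-up probabilities and inverts it with Bayes' rule to get P(disease | symptom observed).

```python
# Two-node Bayesian network: Disease -> Symptom, with illustrative (made-up) numbers.
p_disease = 0.01                      # prior P(D = true)
p_symptom_given_d = {True: 0.90,      # P(S = true | D = true)
                     False: 0.05}     # P(S = true | D = false), false-positive rate

# Joint probabilities for the evidence "symptom observed"
joint_true = p_disease * p_symptom_given_d[True]          # P(D, S)
joint_false = (1 - p_disease) * p_symptom_given_d[False]  # P(not D, S)

# Bayes' rule: condition on the evidence by normalising
p_d_given_s = joint_true / (joint_true + joint_false)
print(f"P(disease | symptom) = {p_d_given_s:.3f}")   # about 0.154
```

Even with a fairly reliable symptom, the low prior keeps the posterior modest, which is exactly the kind of reasoning these networks are used to automate at larger scale.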

Presentation Links

Please click here to download Microsoft Powerpoint presentation

Wikipedia Link Click here

Sunday, July 12, 2009

3D optical data Storage

3D optical data storage is the term given to any form of optical data storage in which information can be recorded and/or read with three dimensional resolution (as opposed to the two dimensional resolution afforded, for example, by CD).

This innovation has the potential to provide terabyte-level mass storage on DVD-sized disks. Data recording and readback are achieved by focusing lasers within the medium. However, because of the volumetric nature of the data structure, the laser light must travel through other data points before it reaches the point where reading or recording is desired. Therefore, some kind of nonlinearity is required to ensure that these other data points do not interfere with the addressing of the desired point.
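
To see where "terabyte-level" comes from, here is a rough, hypothetical capacity estimate: if each recorded depth level held roughly what a DVD layer holds (~4.7 GB) and the medium supported on the order of 200 addressable levels, the disc would approach 1 TB. The numbers are illustrative, not a product specification.

```python
# Rough illustrative estimate of 3D optical disc capacity (hypothetical numbers).
per_layer_gb = 4.7    # roughly one DVD layer's worth of data per recorded depth level
layers = 200          # assumed number of addressable depth levels

capacity_tb = per_layer_gb * layers / 1000
print(f"~{capacity_tb:.2f} TB for {layers} layers")   # ~0.94 TB
```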

No commercial product based on 3D optical data storage has yet arrived on the mass market, although several companies are actively developing the technology and predict that it will become available by 2010.

Presentation Links

Click here for 3D volume storage in pdf

Wikipedia Link Click here


Green Dam Youth Escort

Green Dam Youth Escort is content-control software developed in the People's Republic of China (PRC). Under a directive from the Ministry of Industry and Information Technology (MIIT) of the PRC taking effect on 1 July 2009, it is mandatory to have either the software, or its setup files pre-installed on, or shipped on a compact disc with, all new personal computers sold in Mainland China, including those imported from abroad. End users, however, are not under mandate to run the software.

As of 30 June 2009, the mandatory pre-installation of the Green Dam software on new computers has been delayed to an unknown date. However, Asian brands such as Sony, Acer, Asus, BenQ and Lenovo are shipping the software as originally ordered.

The buffer overflow flaw exists in the latest, patched version of Green Dam, 3.17, according to security researcher "Trancer," who claims authorship of the attack code.
"I wrote a Metasploit exploit module for Internet Explorer, which exploits this stack-based, buffer overflow vulnerability in Green Dam 3.17," Trancer wrote in his Recognize-Security blog. "I've tested this exploit successfully on the following platforms: IE6, Windows XP SP2, IE7, Windows XP SP3, Windows Vista SP1."

The attack code, which has been posted to the Milw0rm Web site for proof-of-concept exploits, has been circulating in the wild for a week, according to security consultant and ZDNet blogger Dancho Danchev.

The Chinese government has ordered Green Dam censorware, billed as a pornography filter, to come preinstalled on all PCs sold in the country beginning July 1. Jinhui Computer System Engineering, which produces the software, patched Green Dam after a team from the University of Michigan exposed a buffer overflow flaw in it.

Last week, the researchers said in an addendum to their original paper that despite this patch, the software remains vulnerable to buffer overflow attacks, which indicates that Green Dam's security problems "run deep."

Green Dam intercepts Internet traffic using a library called SurfGd.dll. Even after the patch, SurfGd.dll still uses a fixed-length buffer to process Web site requests, the researchers explained. Malicious Web sites could overrun this buffer to take control of the execution of applications on a target computer.

"The program now checks the lengths of the URL and individual HTTP request headers, but the sum of the lengths is erroneously allowed to be greater than the size of the buffer," wrote the researchers. "An attacker can compromise the new version by using both a very long URL and a very long 'Host' HTTP header. The pre-update version, 3.17, which we examined in our original report, is also susceptible to this attack."

Green Dam is also vulnerable to a blacklisting flaw, identified by University of Michigan researchers Scott Wolchok, Randy Yao, and J. Alex Halderman, which could allow third parties to upload malware via an innocuous-seeming update.

Presentation Link

Click here to download the presentation style in pdf

Wikipedia Link for Green Dam Click here

Green Computing

Green computing is the study and practice of using computing resources efficiently. The primary objective of such a program is to account for the triple bottom line, an expanded spectrum of values and criteria for measuring organizational (and societal) success. The goals are similar to those of green chemistry: reduce the use of hazardous materials, maximize energy efficiency during the product's lifetime, and promote recyclability or biodegradability of defunct products and factory waste.

Modern IT systems rely upon a complicated mix of people, networks and hardware; as such, a green computing initiative must be systemic in nature, and address increasingly sophisticated problems. Elements of such a solution may comprise items such as end user satisfaction, management restructuring, regulatory compliance, disposal of electronic waste, telecommuting, virtualization of server resources, energy use, thin client solutions, and return on investment (ROI).

The imperative for companies to take control of their power consumption, for technology and more generally, therefore remains acute. One of the most effective power management tools available in 2009 may still be simple, plain, common sense.

Presentation Links

Click here to download the presentation in pdf form

Wikipedia link for Green Computing Click here

E-waste management

Sorry guys, this time it is not technology related... I want to do something useful for our society. I think this topic (e-waste management) is very valuable and will make an impression on people, since it is simple to present and we all must be aware of this problem.

Electronic waste, e-waste, e-scrap, or Waste Electrical and Electronic Equipment (WEEE) describes loosely discarded, surplus, obsolete, or broken electrical or electronic devices. The processing of electronic waste in developing countries causes serious health and pollution problems because electronic equipment contains some very serious contaminants such as lead, cadmium, beryllium and brominated flame retardants. Even in developed countries, recycling and disposal of e-waste involves significant risk, for example to workers and communities, and great care must be taken to avoid unsafe exposure in recycling operations and leaching of materials such as heavy metals from landfills and incinerator ashes.

Definition

"Electronic waste" may be defined as all secondary computers, entertainment device electronics, mobile phones, and other items such as television sets and refrigerators, whether sold, donated, or discarded by their original owners. This definition includes used electronics which are destined for reuse, resale, salvage, recycling, or disposal. Others define the reusables (working and repairable electronics) and secondary scrap (copper, steel, plastic, etc.) to be "commodities", and reserve the term "waste" for residue or material which was represented as working or repairable but which is dumped or disposed or discarded by the buyer rather than recycled, including residue from reuse and recycling operations. Because loads of surplus electronics are frequently commingled (good, recyclable, and nonrecyclable), several public policy advocates apply the term "e-waste" broadly to all surplus electronics. The United States Environmental Protection Agency (EPA) includes to discarded CRT monitors in its category of "hazardous household waste". but considers CRTs set aside for testing to be commodities if they are not discarded, speculatively accumulated, or left unprotected from weather and other damage.

Debate continues over the distinction between "commodity" and "waste" electronics definitions. Some exporters may deliberately leave difficult-to-spot obsolete or non-working equipment mixed in loads of working equipment (through ignorance, or to avoid more costly treatment processes). Protectionists may broaden the definition of "waste" electronics. The high value of the computer recycling subset of electronic waste (working and reusable laptops, computers, and components like RAM) can help pay the cost of transportation for a large number of worthless "commodities".

Presentation Links

Click here to download the power point presentation

Wikipedia Link for e-waste management Click here

Comments Please !!!!!

PLEASE POST YOUR VALUABLE COMMENTS AND TOPICS THAT YOU NEED RELATED TO Information Technology

BEYOND 3G: 4G IS HERE

DEFINITION

4G is the short term for fourth-generation wireless, the stage of broadband mobile communications that will supersede the third generation (3G). While neither standards bodies nor carriers have concretely defined or agreed upon what exactly 4G will be, it is expected that end-to-end IP and high-quality streaming video will be among 4G's distinguishing features. Fourth-generation networks are likely to use a combination of WiMAX and WiFi.

Technologies employed by 4G may include SDR (software-defined radio) receivers, OFDM (Orthogonal Frequency Division Multiplexing), OFDMA (Orthogonal Frequency Division Multiple Access), MIMO (multiple input/multiple output) technologies, UMTS and TD-SCDMA. All of these delivery methods are typified by high rates of data transmission and packet-switched transmission protocols. 3G technologies, by contrast, are a mix of packet- and circuit-switched networks.

When fully implemented, 4G is expected to enable pervasive computing, in which simultaneous connections to multiple high-speed networks provide seamless handoffs throughout a geographical area. Network operators may employ technologies such as cognitive radio and wireless mesh networks to ensure connectivity and efficiently distribute both network traffic and spectrum.

The high speeds offered by 4G will create new markets and opportunities for both traditional and startup telecommunications companies. 4G networks, when coupled with cellular phones equipped with higher quality digital cameras and even HD capabilities, will enable vlogs to go mobile, as has already occurred with text-based moblogs. New models for collaborative citizen journalism are likely to emerge as well in areas with 4G connectivity.

A Japanese company, NTT DoCoMo, is testing 4G communication at 100 Mbps for mobile users and up to 1 Gbps while stationary. NTT DoCoMo plans on releasing their first commercial network in 2010. Other telecommunications companies, however, are moving into the area even faster. In August of 2006, Sprint Nextel announced plans to develop and deploy a 4G broadband mobile network nationwide in the United States using WiMAX. The United Kingdom's chancellor of the exchequer announced a plan to auction 4G frequencies in fall of 2006.

4G technologies are sometimes referred to by the acronym "MAGIC," which stands for Mobile multimedia, Anytime/any-where, Global mobility support, Integrated wireless and Customized personal service.

Presentation Links

Presentation Link for 4G Click here to download

Style file in pdf format Click here to download

Wikipedia link for 4G Click here


If you have any problem with links please inform me



Windows 7 - Most Focused OS?

Windows 7 (formerly codenamed Blackcomb and Vienna) is an upcoming version of Microsoft Windows, a series of operating systems produced by Microsoft for use on personal computers, including home and business desktops, laptops, tablet PCs and media center PCs. Microsoft has stated that it plans to release Windows 7 to manufacturing starting the end of July 2009, with general retail availability set for October 22, 2009, less than three years after the release of its predecessor, Windows Vista. Windows 7's server counterpart, Windows Server 2008 R2, is slated for release at the same time.

Unlike its predecessor, which introduced a large number of new features, Windows 7 is intended to be a more focused, incremental upgrade to the Windows line, with the goal of being fully compatible with applications and hardware with which Windows Vista is already compatible. Presentations given by the company in 2008 have focused on multi-touch support, a redesigned Windows Shell with a new taskbar, a home networking system called HomeGroup, and performance improvements. Some applications that have been included with prior releases of Microsoft Windows, including Windows Calendar, Windows Mail, Windows Movie Maker, and Windows Photo Gallery, will not be included in Windows 7; some will instead be offered separately as part of the freeware Windows Live Essentials suite.

Presentation Links

Presentation link for Windows 7 Click here to download

Wikipedia link for windows 7 Click here

PC World review link for Windows 7 Click here

IPv6 - The Next Generation Protocol

Definition

The Internet is one of the greatest revolutionary innovations of the twentieth century. It made the 'global village' utopia a reality in a rather short span of time. It is changing the way we interact with each other, the way we do business, the way we educate ourselves and even the way we entertain ourselves. Perhaps even the architects of the Internet would not have foreseen the tremendous growth rate of the network being witnessed today. With the advent of the Web and multimedia services, the technology underlying the Internet has been under stress.
It cannot adequately support many services being envisaged, such as real-time video conferencing, interconnection of gigabit networks with lower bandwidths, high-security applications such as electronic commerce, and interactive virtual reality applications. A more serious problem with today's Internet is that it can interconnect a maximum of about four billion systems, which is a small number compared to the projected number of systems on the Internet in the twenty-first century.
Each machine on the net is given a 32-bit address. With 32 bits, a maximum of about four billion addresses is possible. Though this is a large number, soon the Internet will have TV sets and even pizza machines connected to it, and since each of them must have an IP address, this number becomes too small. The revision of IPv4 was taken up mainly to resolve the address problem, but in the course of refinement, several other features were also added to make it suitable for the next-generation Internet.
This version was initially named IPng (IP next generation) and is now officially known as IPv6 (the version number 5 had already been assigned to the experimental Internet Stream Protocol, so it was skipped). IPv6 uses 128-bit addresses: the source address and the destination address are each 128 bits long. Presently, most routers run software that supports only IPv4. To switch over to IPv6 overnight is an impossible task, and the transition is likely to take a very long time.
However, to speed up the transition, an IPv4-compatible IPv6 addressing scheme has been worked out. Major vendors are now writing software for various computing environments to support IPv6 functionality. Incidentally, software development for different operating systems and router platforms will offer major job opportunities in the coming years.
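
A quick look at the numbers, and at the IPv4-compatible addressing mentioned above, using Python's standard ipaddress module (the sample addresses are documentation examples, not real hosts):

```python
# Address-space sizes and an IPv4-mapped IPv6 address, using only the standard library.
import ipaddress

print(2**32)    # 4294967296  -> the "about four billion" IPv4 limit
print(2**128)   # ~3.4e38     -> the IPv6 address space

# One transition mechanism embeds an IPv4 address inside an IPv6 one (::ffff:a.b.c.d).
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)                              # 192.0.2.1
print(ipaddress.IPv6Address("2001:db8::1").exploded)   # full 128-bit written form
```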

Presentation Links

Click here to download the presentation ...Use MS OFFICE to open

Wikipedia Link for IPV6 Click here

e-Books on ipv6

1. ipv6-tutorial
2. why-ipv6
3. ipv6 complete

If you have any problem while downloading please inform us

Introducing the Google Chrome OS

The history and future of Chrome OS

A little bit of background

For more years than I’ve been alive, there has been a bitter battle between Apple and Microsoft. Over time, Apple realised that no matter what it did, it could never take the monopoly away from Windows, so it moved on to better things. The iPod was a punch in the stomach to Microsoft, which never thought it would take off.
Since then, Bill Gates has banned Apple products in his house and Microsoft set up an “iPod amnesty”, where employees with an iPod could bin it and replace it with a Zune, Microsoft’s iPod equivalent. Due to the recession, this effort was cut short, and the iPod still dominates the Zune.

When Google came about nearly a decade ago, Microsoft’s ears pricked up and the company assumed a defensive stance. Google had no interest in targeting Microsoft - it was Microsoft who saw Google as the threat and poured everything it had into Live Search, now Bing. Google saw this defensive move as quite interesting, partly hilarious. Up until this point, it was Microsoft that was threatened by Google, not the other way around.
When the “official news” broke of Google’s operating system via their blog on Tuesday, the blogosphere picked up the story faster than a fat person picking up a cake in a bakery. It was quite an intense spectacle.

Google’s unearthed plans

Google’s plan to roll out a desktop operating system is not a decision that has been made lightly. Android, Google’s operating system designed for mobile devices, isn’t bad for a first attempt. But I reckon they have been planning this for a good number of years at least.

Via ReadWriteWeb, Google CEO, Eric Schmidt, wrote an article for the Economist:

“In 2007 we’ll witness the increasing dominance of open Internet standards. As web access via mobile phones grows, these standards will sweep aside the proprietary protocols promoted by individual companies striving for technical monopoly. Today’s desktop software will be overtaken by Internet-based services that enable users to choose the document formats, search tools and editing capability that best suit their needs.”

While it would be easy to delve into conspiracy and “read between the lines” here, the above quote does suggest that they were already plotting the Chrome OS.
Chrome, the name of Google’s already quite popular browser, is put forward as the name of their operating system. This could well be a significant step towards the future of operating systems. TechCrunch believes a web browser is all you will need when using a computer in the near future. We see this with online office suites, online email, online social communication and online messaging. Why not the OS too?

In my opinion, Google started up as a lowly search engine which took the world by storm. Nobody expected it to do this well, probably not even the initial team. But with the revenues made, the breadth and depth of the service, the brand and everything else involved, they realised how powerful they were. Google, in this respect, will be the next Microsoft.
What next?

Considering Google has only just taken the beta tags off its most popular non-search services, Gmail and Apps, some may believe that Chrome OS will be in a public forum soon. But as we have seen with Android, beta testing an operating system isn’t as easy as beta testing a web service.
But over the course of Google’s life, it has increasingly been moving in on Microsoft’s turf. With many Google services on the web, this ties in nicely with this operating system. As a relatively bare-bones OS designed for netbooks, based on the Linux kernel, it will be similar to Android in that it will be open source and community driven. What’s more, as a result, it will be free.
I doubt whether this will take much market share away from Microsoft with the upcoming Windows 7 release, but on the non-software front, this will be an interesting couple of years in front of us.
Operating systems are the next battleground, and a Web-based operating system is a possibility. Google has a chance. Still, either way, with the mysterious project codenamed “Midori”, the next generation of Microsoft’s operating systems could well sweep the floor with other projects on the go at present.
For now, all we can really do is wait and see. It is us, the people, the next generation of techies, who will decide.

Saturday, July 11, 2009

DDR3 RAM - Make Your System Fast and Furious

In electronic engineering, DDR3 SDRAM or double-data-rate three synchronous dynamic random access memory is a random access memory interface technology used for high bandwidth storage of the working data of a computer or other digital electronic devices. DDR3 is part of the SDRAM family of technologies and is one of the many DRAM (dynamic random access memory) implementations.

DDR3 SDRAM is an improvement over its predecessor, DDR2 SDRAM. The primary benefit of DDR3 is the ability to transfer data at twice the rate of DDR2 (I/O at 8× the data rate of the memory cells it contains), thus enabling higher bus rates and higher peak rates than earlier memory technologies. There is no corresponding reduction in latency, as that is a feature of the DRAM array and not the interface. In addition, the DDR3 standard allows for chip capacities of 512 megabits to 8 gigabits, effectively enabling a maximum memory module size of 16 gigabytes.

It should be emphasized that DDR3 is a DRAM interface specification; the actual DRAM arrays that store the data are the same as in any other type of DRAM, and have similar performance.
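
A quick illustration of how those transfer rates translate into module bandwidth, taking DDR3-1600 as a worked example (the 8n prefetch is why the I/O runs at 8× the internal memory-cell clock):

```python
# DDR3-1600 as a worked example: internal cell clock, transfer rate, and module bandwidth.
cell_clock_mhz = 200          # internal DRAM array clock
prefetch = 8                  # DDR3 moves 8 bits per cell clock per data pin (8n prefetch)
transfer_rate_mt_s = cell_clock_mhz * prefetch      # 1600 MT/s  -> "DDR3-1600"
bus_width_bytes = 8           # standard 64-bit DIMM data bus

bandwidth_mb_s = transfer_rate_mt_s * bus_width_bytes
print(f"DDR3-{transfer_rate_mt_s}: {bandwidth_mb_s} MB/s (module name PC3-{bandwidth_mb_s})")
# DDR3-1600: 12800 MB/s (module name PC3-12800)
```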

Presentation Links

Use the following links for making the presentation, and use them for analysis as well.

TORRENT WORKING AND PRINCIPLES

BitTorrent is a peer-to-peer file sharing protocol used for distributing large amounts of data. BitTorrent is one of the most common protocols for transferring large files, and by some estimates it accounted for approximately 35% of all traffic on the entire Internet in 2004.

BitTorrent protocol allows users to receive large amounts of data without putting the level of strain on their computers that would be needed for standard Internet hosting. A standard host's servers can easily be brought to a halt if extreme levels of simultaneous data flow are reached. The protocol works as an alternative data distribution method that makes even small computers with low bandwidth capable of participating in large data transfers.

First, a user playing the role of file provider makes a file (or group of files) available to the network. This first user's file is called a seed and its availability on the network allows other users, called peers, to connect and begin to download the seed file. As new peers connect to the network and request the same file, their computer receives a different piece of the data from the seed. Once multiple peers have multiple pieces of the seed, BitTorrent allows each to become a source for that portion of the file. The effect of this is to take on a small part of the task and relieve the initial user, distributing the file download task among the seed and many peers. With BitTorrent, no one computer needs to supply data in quantities which could jeopardize the task by overwhelming all resources, yet the same final result—each peer eventually receiving the entire file—is still reached.
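
A toy sketch of the piece-exchange idea just described: given which pieces each peer in the swarm currently holds, a client typically requests a piece it is missing that is rare in the swarm, so scarce pieces get replicated quickly. The rarest-first heuristic and the swarm data below are illustrative assumptions, not details from the text above.

```python
# Toy illustration of piece selection in a swarm (invented data; rarest-first heuristic).
from collections import Counter

TOTAL_PIECES = 8
swarm = {                      # which piece indices each remote peer currently holds
    "seed":  set(range(TOTAL_PIECES)),
    "peerA": {0, 1, 2},
    "peerB": {0, 3},
    "peerC": {0, 1, 3, 5},
}
my_pieces = {0, 1}

# Count how many peers hold each piece, then request the rarest piece we are missing.
availability = Counter(p for pieces in swarm.values() for p in pieces)
missing = [p for p in range(TOTAL_PIECES) if p not in my_pieces]
next_piece = min(missing, key=lambda p: availability[p])
print(f"request piece {next_piece} (held by {availability[next_piece]} peer(s))")
```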

After the file is successfully and completely downloaded by a given peer, the peer is able to shift roles and become an additional seed, helping the remaining peers to receive the entire file. The community of BitTorrent users frowns upon the practice of disconnecting from the network immediately upon success of a file download, and encourages remaining as another seed for as long as practical, which may be days.

This distributed nature of BitTorrent leads to a viral spreading of a file throughout peers. As more peers join the swarm, the likelihood of a successful download increases. Relative to standard Internet hosting, this provides a significant reduction in the original distributor's hardware and bandwidth resource costs. It also provides redundancy against system problems, reduces dependence on the original distributor and provides a source for the file which is generally temporary and therefore harder to trace than when provided by the enduring availability of a host in standard file distribution techniques.

Programmer Bram Cohen designed the protocol in April 2001 and released a first implementation on July 2, 2001. It is now maintained by Cohen's company BitTorrent, Inc. There are numerous BitTorrent clients available for a variety of computing platforms. According to isoHunt, the total amount of shared content is currently more than 1.7 petabytes.

Presentation Links

This presentation was made in Microsoft PowerPoint 2007, with compatibility maintained with 2003. Click here to download. Special thanks to Vivek Vishnumurthy, who created this presentation.

Presentation from .net pku click here and save target as .ppt

Wikipedia Link Click here

INTELLIGENT NETWORK

INTRODUCTION

The Intelligent Network, usually known by its acronym IN, is a network architecture intended for both fixed and mobile telecom networks. It allows operators to differentiate themselves by providing value-added services in addition to the standard telecom services such as PSTN, ISDN and GSM services on mobile phones.

In IN, the intelligence is provided by network nodes owned by telecom operators, as opposed to solutions based on intelligence in the telephone equipment, or in Internet servers provided by any party.

IN is based on the Signaling System #7 (SS7) protocol between telephone network switching centers and other network nodes owned by network operators.

History and key concepts

The IN concepts, architecture and protocols were originally developed as standards by the ITU-T, the standardization committee of the International Telecommunication Union; prior to this, a number of telecommunications providers had proprietary IN solutions. The primary aim of the IN was to enhance the core telephony services offered by traditional telecommunications networks, which usually amounted to making and receiving voice calls, sometimes with call divert. This core would then provide a basis upon which operators could build services in addition to those already present on a standard telephone exchange.

A complete description of the IN emerged in a set of ITU-T standards named Q.1210 to Q.1219, or Capability Set One (CS-1) as they became known. The standards defined a complete architecture including the architectural view, state machines, physical implementation and protocols. They were universally embraced by telecom suppliers and operators, although many variants were derived for use in different parts of the world .

Following the success of CS-1, further enhancements followed in the form of CS-2. Although the standards were completed, they were not as widely implemented as CS-1, partly because of the increasing power of the variants, but also partly because they addressed issues which pushed traditional telephone exchanges to their limits.

The major driver behind the development of the IN system was the need for a more flexible way of adding sophisticated services to the existing network. Before IN was developed, all new features and/or services had to be implemented directly in the core switch systems. This made for very long release cycles, as bug hunting and testing had to be extensive and thorough to prevent the network from failing. With the advent of IN, most of these services (such as toll-free numbers and geographical number portability) were moved out of the core switch systems and into separate service nodes (the IN), thus creating a modular and more secure network that allowed the service providers themselves to develop variations and value-added services without submitting a request to the core switch manufacturer and waiting out a long development process. The initial use of IN technology was for number translation services, e.g. when translating toll-free numbers to regular PSTN numbers, but much more complex services have since been built on IN, such as Custom Local Area Signaling Services (CLASS) and prepaid telephone calls.
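
A minimal sketch of the number-translation idea: the switch (SSP) detects a service trigger such as a toll-free prefix and, instead of routing directly, asks a service node (SCP) for the real routing number. The trigger prefix and the mapping table here are invented for illustration, not taken from any operator's configuration.

```python
# Illustrative sketch of IN-style toll-free number translation (invented numbers and prefix).
SCP_DATABASE = {                      # the service control point's translation table
    "1800123456": "0201234567",       # toll-free number -> geographic PSTN number
    "1800765432": "0307654321",
}

def ssp_route(dialled: str) -> str:
    """Service switching point: trigger an IN query for toll-free calls, else route normally."""
    if dialled.startswith("1800"):                      # service trigger detected
        routing_number = SCP_DATABASE.get(dialled)      # query to the SCP
        if routing_number is None:
            return "announcement: number not in service"
        return f"route call to {routing_number}"
    return f"route call to {dialled}"                   # ordinary PSTN routing

print(ssp_route("1800123456"))   # route call to 0201234567
print(ssp_route("0211112222"))   # route call to 0211112222
```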

Presentation Links

Click here to download PowerPoint presentations in pdf ......Sorry for the trouble with .pptx...now it is in the .pdf form

Click here to get the presentations page wise text document

Wikipedia Link Click here

IEC e-Book on Intelligent network Click here ....please click right button and Save Target as pdf for download

If you have any problem with download please inform us and post your comments

Intel Core i7 - Fastest Processor On the Planet

Intel Core i7 is a family of several Intel desktop x86-64 processors, the first processors released using the Intel Nehalem microarchitecture and the successor to the Intel Core 2 family. All three current models and two upcoming models are quad-core processors. The Core i7 identifier applies to the initial family of processors codenamed Bloomfield.

Intel representatives state that the moniker Core i7 is meant to help consumers decide which processor to purchase as the newer Nehalem-based products are released in the future. The name continues the use of the Core brand. Core i7, first assembled in Costa Rica, was officially launched on November 17, 2008 and is manufactured in Arizona, New Mexico and Oregon, though the Oregon plant is moving to the next-generation 32 nm process.

Presentation Links

Click here to download original Intel Presentation in pdf format

Wikipedia Link click here

Web 3.0

Web 3.0 is one of the terms used to describe the evolutionary stage of the Web that follows Web 2.0. Given that the technical and social possibilities identified in this latter term are yet to be fully realized, the nature of defining Web 3.0 is highly speculative. In general it refers to aspects of the Internet which, though potentially possible, are not technically or practically feasible at this time.

Origin of the term

Following the introduction of the phrase "Web 2.0" as a description of the recent evolution of the Web, the term "Web 3.0" has been introduced to hypothesize about a future wave of Internet innovation. Views on the next stage of the World Wide Web's evolution vary greatly, from the concept of emerging technologies such as the Semantic Web transforming the way the Web is used (and leading to new possibilities in artificial intelligence) to the observation that increases in Internet connection speeds, modular web applications, and advances in computer graphics will play the key role in the evolution of the World Wide Web.

Expanded definition

Web 3.0, a phrase coined by John Markoff of the New York Times in 2006, refers to a supposed third generation of Internet-based services that collectively comprise what might be called 'the intelligent Web'—such as those using semantic web, microformats, natural language search, data mining, machine learning, recommendation agents, and artificial intelligence technologies—which emphasize machine-facilitated understanding of information in order to provide a more productive and intuitive user experience.

Nova Spivack defines Web 3.0 as the third decade of the Web (2010–2020), during which he suggests several major complementary technology trends will reach new levels of maturity...

Presentation and Style Link

This presentation is given on Microsoft's PowerPoint 2007 ....Compatibility maintained with 2003

Click Here to download

To get more details please download the seminar style for web 3.0 from the following link

Click here to get Style for Web 3.0

Wikipedia Link for web 3.0 click here

HowStuffWorks link for Web 3.0 Click here

If you have any problem with download please inform us