
Friday 12 February 2010

A quick Introduction to M2M Communications

Machine-to-Machine (M2M) communications is a healthy sector that's expanding rapidly and generating significant revenues for mobile network operators (MNOs). Devices outnumber subscribers by an order of magnitude, but the term doesn't do justice to the concept and the market it represents.

The following is from the 3G Americas report on 3GPP standards and their evolution to 4G:

By leveraging connectivity, Machine-to-Machine (M2M) communication would enable machines to communicate directly with one another. In so doing, M2M communication has the potential to radically change the world around us and the way that we interact with machines.

In Rel-10, 3GPP is in the process of establishing requirements for 3GPP network system improvements that support Machine-Type Communications (MTC). The objective of this study is to identify 3GPP network enhancements required to support a large number of MTC devices in the network and to provide necessary network enablers for MTC communication service. Specifically, transport services for MTC as provided by the 3GPP system and the related optimizations are being considered as well as aspects needed to ensure that MTC devices and/or MTC servers and/or MTC applications do not cause network congestion or system overload. It is also important to enable network operators to offer MTC services at a low cost level, to match the expectations of mass market machine-type services and applications.

The 3GPP study on M2M communications has shown potential for M2M services beyond the current "premium M2M market segment." Examples of applications for mass M2M services include machine-type communications in the smart power grid, smart metering, consumer products, health care, and so forth. Current mobile networks are optimally designed for Human-to-Human (H2H) communications, but are less optimal for M2M applications.


A study item on M2M communications (3GPP TR 22.868) was completed in 2007; however, no subsequent normative specification has been published. For Rel-10 and beyond, 3GPP intends to take the results on network improvements from the study item forward into a specification phase and address the architectural impacts and security aspects to support MTC scenarios and applications. As such, 3GPP has defined a work item on Network Improvements for Machine-Type Communication (NIMTC). The following goals and objectives are described in the work item:

The goal of this work item is to:
• Provide network operators with lower operational costs when offering machine-type communication services
• Reduce the impact and effort of handling large machine-type communication groups
• Optimize network operations to minimize impact on device battery power usage
• Stimulate new machine-type communication applications by enabling operators to offer services tailored to machine-type communication requirements

The objectives of this work item include:
• Identify and specify general requirements for machine-type communications
• Identify service aspects where network improvements (compared to the current H2H oriented services) are needed to cater for the specific nature of machine-type communications
• Specify machine-type communication requirements for these service aspects where network improvements are needed for machine-type communication
• Address system architecture impacts to support machine-type communication scenarios and applications

A RAN study item to investigate the air interface enhancements for the benefit of M2M communication has also been recently approved. The study will be initiated in early 2010.

Further Reading:

M2M will become really big


It seems kind of odd to call a prediction that an industry segment will reach $18.9 billion an understatement, but in this case it may be so.


This week, Juniper Research pegged the mobile and embedded M2M industry at that amount worldwide by 2014. The press release says that consumer and commercial telematics – vehicle-bound M2M – will represent more than a third of the total.

Nineteen billion dollars is a lot of money. But even that pot of gold pales in comparison to the promise of M2M. M2M covers smart grid, telematics and a mind-boggling number of other consumer and business services and applications. Indeed, the specter of M2M -- thousands of gadgets talking to millions of widgets -- is one of the reasons that Internet Protocol version 6 is being pushed so hard in some quarters.

Another example of the potential size of the market comes from Berg Insight. The firm says the European M2M module market will grow from 2.3 million last year to 22 million in 2014. Systems under surveillance – alarm systems and tracking devices watched from a monitoring center – will grow from 10 million to 34 million during the same period. The site goes into some detail on the composition of the market.



A related story takes a deeper look at smart meters, the element of the smart grid industry that has been around the longest. The story quotes ABI Research numbers that 76 million smart meters had been deployed worldwide by the end of last year. That number will jump to 212 million by 2014. Lux Research, the story says, predicts that the value of the smart grid market overall will grow from $4.5 billion now to $15.8 billion in 2015. Advanced metering infrastructure and smart meters will represent more than $5 billion of that.

The only thing that is certain is that growth will be significant. The dangers of making precise predictions are evident in the recent findings: Juniper says that the mobile and embedded M2M market will reach $18.9 billion by 2014, while Lux says the smart grid market alone will finish 2015 only $3.1 billion short of that figure. One thing that these firms would agree on, however, is that this is a giant opportunity.

You can also read Juniper Research's paper, 'M2M ~ Rise of the Machines' here.

Wednesday 10 February 2010

Google: real-time speech translation on mobiles in a couple of years

Live language translation on mobile phones could be just two years away, according to search giant Google. The company already offers text translation services and voice recognition, and Franz Och, head of translation services, says that work has already begun on combining the two.

The technology would work by translating phrases rather than individual words, and the company hopes that by looking at the huge amount of translated text already online, it can produce systems that are much more accurate than current versions. “If you look at the progress in machine translation and corresponding advances in voice recognition, there has been huge progress recently,” he said.
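
To make that concrete, here is a purely illustrative sketch of such a pipeline. Everything in it – the tiny phrase table and the stub speech-recognition and speech-synthesis functions – is a made-up stand-in, not Google's actual system; the point is simply that live translation chains speech recognition, phrase-based translation and speech synthesis together.

```python
# Toy pipeline: stub ASR -> phrase-based translation -> stub TTS.
# The phrase table and stubs are invented for illustration only.

PHRASE_TABLE = {              # toy English -> Spanish phrase pairs
    "good morning": "buenos días",
    "how are you": "cómo estás",
}

def recognize_speech(audio_chunk: bytes) -> str:
    """Stub ASR: pretend the audio decoded to this text."""
    return "good morning , how are you"

def translate(text: str) -> str:
    """Greedy longest-phrase-first translation over the toy phrase table."""
    out, words = [], text.replace(",", "").split()
    i = 0
    while i < len(words):
        for j in range(len(words), i, -1):   # try the longest phrase first
            phrase = " ".join(words[i:j])
            if phrase in PHRASE_TABLE:
                out.append(PHRASE_TABLE[phrase])
                i = j
                break
        else:
            out.append(words[i])             # unknown word passes through
            i += 1
    return " ".join(out)

def synthesize_speech(text: str) -> bytes:
    """Stub TTS: just encode the text."""
    return text.encode("utf-8")

print(synthesize_speech(translate(recognize_speech(b"..."))))
```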

With over 6,000 languages spoken around the world, however, and only 52 currently on offer through Google’s existing translations services, the service is some way from meaning that language teaching in schools becomes redundant. “Clearly, for it to work smoothly, you need a combination of high-accuracy machine translation and high-accuracy voice recognition, and that's what we're working on,” said Mr Och.

So far, that is not yet possible, and language experts suggested that seamless technology is currently a distant prospect. David Crystal, honorary professor of linguistics at Bangor University, said the problems of dealing with speed of speech and range of accents could prove insurmountable.

'No system at the moment can handle that properly,' he added.


Tuesday 8 December 2009

Where does mobile go next


Another interesting presentation "Where does mobile go next: lessons from the past, clues to the future" by Professor Joe McGeehan of Toshiba TRL available here.

Tuesday 24 November 2009

Wireless Phone chargers coming in time for Christmas


We have talked about WiTricity and Nokia's self-recharging phones before, but they still seem a long way off.


PowerPad, made by the British gadget firm Gear4, goes on sale next month and is among a new wave of devices sweeping us towards this unplugged utopia. A protective sleeve slips over an iPhone, slotting into its connector socket. When the encased phone is placed on a mains-connected pad on, say, a desk or bedside table, electricity makes the jump. American outfits PowerMat and WildCharge make similar devices. Meanwhile, the Palm Pre smartphone has its own "Touchstone" charger and Dell's Latitude Z is the first laptop to charge wirelessly.

"Wireless electricity is something we used to talk about years ago almost as a bit of a joke when we made predictions about the future," says Michael Brook, editor of the gadget magazine, T3. "To a lot of people it sounds insane that you could even do it – like some kind of witchcraft – but we're seeing a lot of interest in the first wireless chargers. It's going to take off in a big way." If not witchcraft, how does it work? Here's the science: Current from the mains is wired into a transmitter coil in the charging mat. This generates an electromagnetic field. A receiver coil in the phone's case takes the power from the magnetic field and converts it back into electricity that charges the device. By separating those coils, induction charging takes the 150-year-old principle used in the transformers found in most electric devices and splits it in half. No more tripping over laptop leads and their power bricks or diving under your desk to plug in your charger – just put your gadget on the mat and induction takes care of the rest.

But wireless induction, which, in a less sophisticated form, has charged electric toothbrushes and some medical implants for years, isn't perfect. Advances mean it's now viable for more demanding devices, but in the case of the PowerPad, it requires a case that adds bulk to what is already a hefty handset. Another drawback is the lack of compatibility – a phone with a PowerPad case will not charge on a PowerMat.

A growing group of electronics firms wants to deal with the problem. The Wireless Power Consortium (WPC) includes Gear4 and the mobile phone giants Nokia, Samsung and RIM, maker of the BlackBerry. "These companies think there won't be a mass market for wireless charging unless there is a standard," says Menno Treffers, chairman of the consortium's steering group and a director at Philips.

Learning their lesson from the hopeless incompatibility of wired chargers, supporters of WPC's Qi ("chi") standard will put universal coils in devices that will work without cumbersome cases. They'll also be compatible with any charging mat, whether it's on your desk or recessed in a table at Starbucks. Treffers expects the first Qi-compatible devices to hit shelves next year.

But there remains a major flaw in charging mats – their need for proximity. Separation of even a millimetre renders most mats useless. Take your laptop to your bedroom to watch a DVD and you'll need a second mat or a cable. For a truly wireless scenario, electricity must make a giant leap.

Marin Soljacic is a Croatia-born physics professor at Massachusetts Institute of Technology (MIT). In 2002, he got annoyed when his wife's mobile phone woke him up with beeping when its battery ran low. "Not only did I have to wake up to plug it in but had to find the charger in the dark," he says. "I thought, power is everywhere – sockets all over the house – yet it isn't close enough." Soljacic was sure there must be a way to bridge the gap. He wanted his wife's phone to charge while it was still in her handbag. Two years ago, after months of equation crunching and computer modelling, Soljacic literally had a light bulb moment when he flicked the switch of a 60-watt lamp. No big deal except that the electricity powering the light was travelling two metres through thin air.

Soljacic and his team at MIT have since formed a company called WiTricity. Last July, its chief executive, Eric Giler, came to Oxford to demonstrate a wireless television. In front of an amazed audience at a technology conference, he powered up a giant plasma screen TV that had no cables. Electricity sprung from a sleek unit on the floor to a receiver mounted on the back of the screen. Last month, Giler travelled to Japan to show off a wirelessly-charged electric car. "Every time I show people they're blown away," Giler says. "When you see it up close it does appear almost magical."

Soljacic's magic takes the split-transformer model that powers charging mats and adds a key ingredient to make electricity fly. It's called resonance, the phenomenon that means a singer who matches the acoustic frequency of a wine glass can shatter it. Soljacic knew that two resonant objects of the same resonant frequency tend to exchange energy efficiently – imagine a tuning fork causing a nearby fork with the same frequency to chime sympathetically. His breakthrough was to work out a way to use resonance in magnetic form to transfer not sound but electricity. He explains: "By coupling the magnetic field that surrounds a resonant coil to another coil resonating at the same frequency, we can make the electricity hop from one to the other."
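
For the curious, the textbook coupled-mode result gives a feel for why resonance matters so much here. The sketch below uses that standard formula with made-up coil numbers (it is not WiTricity's design data): even weakly coupled coils can transfer power efficiently if both resonators have a high quality factor Q.

```python
# Rough back-of-the-envelope sketch: coupled-mode theory gives a figure of
# merit U = k * sqrt(Q1 * Q2) for two resonators with coupling coefficient k
# and quality factors Q1, Q2, and a maximum link efficiency of
# U^2 / (1 + sqrt(1 + U^2))^2. All numbers below are illustrative only.

import math

def max_transfer_efficiency(k: float, q1: float, q2: float) -> float:
    """Upper bound on power-transfer efficiency for two coupled resonators."""
    u = k * math.sqrt(q1 * q2)                       # figure of merit
    return u**2 / (1 + math.sqrt(1 + u**2))**2

# Loosely coupled coils (small k) can still transfer power well with high Q.
for k in (0.001, 0.01, 0.1):
    print(f"k = {k:5.3f} -> efficiency ~ {max_transfer_efficiency(k, 1000, 1000):.2f}")
```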

WiTricity's strongly coupled magnetic resonance means cars, TVs, free-standing lamps, and computers – anything that requires electricity – can be powered or charged from a central source in the ceiling or under the floor. And it's all totally safe. "The fields that we are generating are about the same as the earth's magnetic field," Giler says. "We live in a magnetic field."

Giler and his team are in talks with big-name electronics manufacturers, including many of those who are putting their names to the Qi standard for charging mats. Giler says proximity charging is "first-generation stuff; by the end of next year you'll start seeing devices with WiTricity components built in". If he is right, homes and offices could soon be fully wireless. "It's a fundamental breakthrough in science and a game changer for the industry," he says. "Cut the cords and the world's going to change."

Interesting Video:




Sunday 13 September 2009

Scratch Input: Future Input for Mobile Phones




Very interesting... one of the problems with smaller devices is not being able to see past your fingers on the screen. That's where "scratch input" comes in. Harrison's prototype uses a digital stethoscope to pick up the sound of scratching on a table or wall. The device attached to the stethoscope, be it a phone, watch or computer, is programmed to recognise the sounds of different scratch gestures. By tracing a spiral on his desk, Harrison can, for example, turn the volume down on his media player. Ultimately the microphone would be built into the device. Imagine a touchscreen watchphone that can be controlled simply by scratching your arm.
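
As a toy illustration of the idea (and not Harrison's actual implementation), a gesture recogniser along these lines could work on the amplitude envelope of the stethoscope signal, counting distinct scratch "bursts" to tell a single scratch from a double scratch:

```python
# Toy scratch-gesture classifier: count high-energy bursts in the audio
# envelope. The gesture mapping and numbers are invented for illustration.

import numpy as np

def count_scratch_bursts(samples: np.ndarray, rate: int,
                         threshold: float = 0.2, min_gap_s: float = 0.15) -> int:
    """Count distinct high-energy bursts in a clip of scratching sounds."""
    window = int(0.01 * rate)                      # 10 ms energy windows
    n = len(samples) // window
    energy = np.abs(samples[:n * window]).reshape(n, window).mean(axis=1)
    energy = energy / (energy.max() + 1e-9)        # normalise to 0..1
    active = energy > threshold
    bursts, last_end = 0, -np.inf
    for i, on in enumerate(active):
        if on and (i - last_end) * 0.01 > min_gap_s:
            bursts += 1                            # new burst after a silence gap
        if on:
            last_end = i
    return bursts

GESTURES = {1: "single scratch -> play/pause", 2: "double scratch -> next track"}

# Synthetic example: two short noise bursts separated by silence.
rate = 8000
clip = np.zeros(rate)
clip[1000:1800] = np.random.randn(800) * 0.5
clip[5000:5800] = np.random.randn(800) * 0.5
print(GESTURES.get(count_scratch_bursts(clip, rate), "unknown gesture"))
```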

You may also be interested in reading this article on touch screens at The Independent here.

Friday 11 September 2009

Ericsson's Exciter: Conceptual mobile Personal Area Mediator (PAM)

Interesting Video:



If you find this interesting, there is a presentation you can look at here. Unfortunately it's in Swedish, but you can still get an idea of the direction in which things will be going in the future.

Tuesday 11 August 2009

What is 5G going to be?



I suppose it's pretty pointless to talk about 5G because LTE, also known as 3.9G, is not yet deployed. Nevertheless, I have seen various discussions about 5G in different forums recently.

Just to recap, LTE or 3.9G will take DL speeds up to 300 Mbps and UL speeds to around 75 Mbps. LTE-Advanced, also known as 4G, will take the speeds up to 1 Gbps in low-mobility scenarios, while in high-mobility scenarios the rates could be up to 100 Mbps. So what about 5G?

Well, honestly, it's too difficult to foresee and speculate on the rates 5G would provide. In fact, 5G may be a completely different paradigm and use another yardstick to measure technological progress. The ITU has moved away from the 'generation' concept, and even though it's accepted that LTE-A is 4G, it's not referred to as 4G by the ITU. The ITU does not mention 4G anywhere in its next-generation acronyms; the term used is IMT-Advanced.

So let's speculate on what 5G could be.

I think 5G will be more about reliability and convergence of technologies. Imagine a phone with up to eight MIMO antennas, simultaneously using different technologies. The user is connected to the web using multiple technologies and at any instant of time is getting multiple streams from different sources. When one of these sources fails, another technology can simply take over and maintain the connection. In a simple case, if we map this to today's technologies, we would have one antenna connected to HSPA, one to WiMAX, one to WiFi, one to UWB, and so on.
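
A rough sketch of that failover idea, with made-up link names, might look like this: the device keeps several radio links alive and simply re-routes traffic when the link currently in use fails.

```python
# Toy multi-connectivity manager: try links in priority order and fall back
# transparently when one fails. Link names and behaviour are illustrative only.

from dataclasses import dataclass

@dataclass
class RadioLink:
    name: str
    alive: bool = True

    def send(self, payload: bytes) -> bool:
        if not self.alive:
            return False
        print(f"sent {len(payload)} bytes over {self.name}")
        return True

class MultiLinkConnection:
    """Try links in priority order; fall back when the current one fails."""
    def __init__(self, links):
        self.links = links

    def send(self, payload: bytes) -> bool:
        return any(link.send(payload) for link in self.links)

conn = MultiLinkConnection([RadioLink("HSPA"), RadioLink("WiMAX"),
                            RadioLink("WiFi"), RadioLink("UWB")])
conn.send(b"hello")            # goes over HSPA
conn.links[0].alive = False    # HSPA drops...
conn.send(b"hello again")      # ...and WiMAX takes over seamlessly
```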

Hopefully IMS (or a similar technology) will become a reality much earlier, but even if it doesn't, it should be operational by the time 5G arrives. Multiple devices could then be reached via the same contact, with presence rules controlling them. This could also help with other services like messaging and PoC.

Television streaming should become completely seamless, and it should be possible to provide hundreds of channels without any spectrum limitations.

Cell sizes could become extremely large, and cells could have, say, 80% overlap with no interference. This would even open up the opportunity for femtocell-like devices that provide coverage for, say, up to a kilometre without causing any interference. It should also help get rid of the cell-edge rate problem and help avoid congestion, capacity crunches, and so on.

Cloud computing could hit mobile technology as well, so that phones can do amazing things without needing much processing power of their own. This is a bit like the 'sixth sense' technology case, where the computational power is in the cloud and the phone is just a device to connect to the web. In fact, the cloud concept could be extended so that different gadgets become part of the cloud: maybe your refrigerator processes your data when you are near it, or maybe your television does. The end user should not be aware, and shouldn't care, as long as the job is getting done.

I suppose when this many features are available in a technology, applications will be able to do amazing things; all that will be needed is creative developers coming up with new and innovative ideas.

These are just ideas, please feel free to add yours.

PS: And of course we would need support for voice and SMS ;-)

Friday 7 August 2009

Multi-Standards Radio Base Station (MSR-BS) in 3GPP Release 9

I wrote about Future Mobile Terminals earlier, which will probably be multiservice, multinetwork and multimode. A similar approach will be needed on the network side. 3GPP is working on the Release 9 feature Multi-Standard Radio Base Station (MSR-BS). The 3GPP spec 37.900 is not yet available, but a draft should be available soon.

Research and Markets have already released a report arguing the benefits of MSR-BS. Last year Ericsson released the RBS 6000 series products, which have MSR support. Huawei and Nokia Siemens Networks are also working on similar products under different guises. Martin has blogged about this topic earlier as well, in case you want to refer to it.

According to the Research and Markets report, the terms used for this technology are Multi-Standard Radio Base Station (MSR-BTS/MSR-BS), Multi-Mode Radio Base Station (MMR-BTS/MMR-BS) and Multi-Radio Access Technology (Multi-RAT). The name used in the standards is usually MSR-BS.

So what is MSR-BS? The 3GPP definition is: Base Station characterized by the ability of its receiver and transmitter to process two or more carriers in common active RF components simultaneously in a declared RF bandwidth, where at least one carrier is of a different RAT than the other carrier(s).

In very simple terms, a single base station unit will be able to transmit different radio access technologies simultaneously. A unit may, for example, be transmitting GSM, WCDMA 2100 and LTE 2600 at the same time.
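
A minimal sketch of that definition, using made-up carrier numbers, could look like the following: one unit declares an RF bandwidth and checks that the carriers it is asked to transmit fit inside it and span at least two RATs.

```python
# Illustrative MSR-BS model only: carrier frequencies and bandwidths below are
# invented examples, not a real deployment or a 3GPP data structure.

from dataclasses import dataclass

@dataclass
class Carrier:
    rat: str            # "GSM", "WCDMA", "LTE", ...
    centre_mhz: float
    bandwidth_mhz: float

@dataclass
class MsrBs:
    declared_low_mhz: float
    declared_high_mhz: float
    carriers: list

    def validate(self) -> bool:
        """All carriers must fit inside the declared RF bandwidth, and at
        least two different RATs must be present (per the MSR-BS definition)."""
        fits = all(self.declared_low_mhz <= c.centre_mhz - c.bandwidth_mhz / 2 and
                   c.centre_mhz + c.bandwidth_mhz / 2 <= self.declared_high_mhz
                   for c in self.carriers)
        multi_rat = len({c.rat for c in self.carriers}) >= 2
        return fits and multi_rat

bs = MsrBs(2110.0, 2170.0, [
    Carrier("WCDMA", 2112.5, 5.0),
    Carrier("LTE",   2140.0, 10.0),
])
print(bs.validate())   # True: two RATs sharing one declared RF bandwidth
```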

The number of technologies supported by a BTS will be an implementation choice. With the technology maturing, it won't be surprising to see up to 4-5 different technologies in an MSR-BS within the next five years.

The advantage for the mobile operator will not only be monetary; there is also the possibility of space savings. But, as the old English proverb says, they will be "putting all their eggs in one basket": if one unit stops working, coverage in the area goes down, and there may be no option to fall back on a different technology.

The way these MSR-BS are implemented will definitely be based on Software Defined Radio (SDR). The advantage of SDR is that different parts of the world use slightly different frequencies for the same technologies – GSM-850, for example, is specific to the USA, whereas the rest of the world uses GSM-900. These small variations will be easily customisable with an SDR-based MSR-BS, and optimisations won't be far off.

Different Band Categories have been defined for different scenarios. For example, Band Category 1 covers deployments where GSM won't be present and only LTE and WCDMA are used. Band Category 2 covers frequency bands where GSM, EDGE, WCDMA and LTE may all be present. Band Category 3 is designed with TDD and TD-SCDMA in mind.

More information as and when it becomes available.

Thursday 28 May 2009

Innovate now or lose market share

Whether it is wise to invest in the current economic climate has been one of the most intriguing questions flagged up repeatedly over the past year.

At the recently concluded Cambridge Wireless international event, the above topic was discussed at length. Most of the speakers at the event recommended that now is the right time to invest, but that innovation, rather than investment alone, is the key to success.

There has never been more urgency to innovate in order to get out of the current recession and build success for the future. Sensible investment, backed by the right focus and indefatigable innovative ideas, will no doubt lead us on the road to success and help build the next-generation wireless world.

Richard Traherne, Director of Cambridge Consultants' Wireless Division, advised delegates at the Cambridge Wireless International Conference to innovate now or lose market share. Speaking to an audience of international business leaders in the wireless communications industry at the conference, he said the following:
“To survive in a market like this, it is not enough to stand still. It is critical to have the confidence to be innovative, by which we mean making business out of creativity.”

Key innovation now will certainly help businesses buck the trend in a recession and gain market share. Mr Traherne continued, “Key to this endeavour is to recognise that customers’ needs change in a downturn and so it’s critical to re-calibrate to ensure that they get what they now need, when they need it. There are plenty of examples of companies that grew out of past recessions: Virgin, Apple, Google, to name but three. We are dealing with companies that are being far bolder in the current recession than they would have been in the past, investing hard in technology despite making cuts elsewhere, to ensure that they grow market share and exit the downturn with competitive advantage.”

Delegates at the event were given insights into innovative strategies, gained from nearly 50 years in the business of developing breakthrough technology-based products for clients in the medical technology, consumer, transport, cleantech and wireless industries.

One of the obvious approaches suggested during the event for beating the current recession was to reduce product cost while at the same time pursuing other, more technologically innovative opportunities. It is very important that the idea behind the product entering the market is clear; a well-defined focus is a must, together with innovation and creativity. The picture below shows one such process as an example:
A company could, for example, seek to achieve premium price positioning by adding new functionality, or it could introduce a novel service strategy to carve out market share. Another option is to develop an eco-system of partners in different markets to scale the business and share risk.

What we are seeing a lot of today, and what is equally recommended even in a growth market, is the selective re-deployment of existing technology in new product applications. The mobile phone manufacturers are a shining example of this, and continue to be so.

The two day conference on 30 April and 1 May 2009, entitled ‘The Future of Wireless’, was conceived to provide a strategic vision of how mobile and wireless markets will develop over the next five years, looking at what technology is likely to deliver, balanced against customer expectations and real-world economic factors.

Friday 15 May 2009

Golden-i: Futuristic Bluetooth Headset with Virtual PC Display

Microdisplay technology maker Kopin Corp. reports it has partnered with Motorola Inc. to introduce a wireless headset with a high-definition virtual display and speech recognition for remote control of things such as smart phones and PCs. Taunton-based Kopin (Nasdaq: KOPN) teamed up with Motorola's Enterprise Mobility Solutions division to put a 15-inch virtual PC display together with a microphone and earpiece into the headset it calls the "Golden-i." According to material from Kopin, the Golden-i uses Bluetooth 2.0 to connect to the devices, as well as to Bluetooth-enabled peripherals such as a mouse, touchscreen or keyboard. Golden-i runs on the Microsoft Windows Embedded CE 6.0 R2 operating system, and once connected, users will see their PC desktop screen on the 15-inch virtual display.

To control the connected device hands-free, Kopin is incorporating the VoCon3200 software from Burlington-based Nuance Communications Inc. Golden-i also uses Nuance's text-to-speech application to read back documents, e-mail messages, web content or any text on the display screen. According to Kopin officials, it supports up to 20 languages. If no interface peripheral such as a Bluetooth mouse is available, Golden-i can use its built-in Hillcrest Labs 6-axis, real-time position tracker to allow control of the connected device using head gestures. The device also provides a mini-USB port and a removable Micro SD card slot that can support up to 32GB of memory.

The target market for Golden-i, according to Kopin, is remote workers looking to quickly connect to a PC or network for information, such as outside sales staff. It is also aimed at network support personnel, as it can support connections to multiple devices, Kopin said. Kopin, which counts the defense industry as a major customer, last December reported it had landed $3.1 million from the U.S. military for displays used in weapon sights.

Once connected to a host device, such as a PC, users see their PC desktop screen on the 15-inch virtual display, and with Nuance's VoCon3200 software they can control it using voice commands in a number of languages. Kopin claims this software provides more than 90 percent proficiency straight out of the box, and the more it is used, the better it works. Golden-i requires no push-to-talk buttons and is ready to respond to a user's request whether in light hibernation or during intermittent use. Golden-i also readily accepts conventional user interface input from any host device touchscreen, keyboard or wireless mouse, and integrates Nuance text-to-speech, enabling Golden-i to read back any text displayed in a number of common languages.

Running on the Windows Embedded CE 6.0 R2 platform, Golden-i can remotely wake a PC from practically any location and, when work is finished, the PC can be placed in hibernation with a single spoken command. The headset can also remotely control up to seven other devices or networks at one time, similar to the way users control software applications on a PC desktop. It operates much like a highly mobile server, a hub between various host devices. If a USB interface or removable memory is required, Golden-i provides a mini-USB port and a removable Micro SD card slot capable of supporting up to 32GB. Supported by Texas Instruments' third-generation OMAP dual processor platform, a single 1200 mAh li-ion battery should provide more than eight hours of standard use.
While the Golden-i can be used just about anywhere, it is designed for “mobile information snacking”, rather than continuous use over long periods. Initial development of the unit has focused on industrial applications, so Kopin is seeking to engage several industrial organizations in several months of in-depth field testing and evaluation. Kopin hopes to incorporate any improvements and refinements uncovered during testing into its Golden-i products, which are expected to be available in 2010. Kopin believes Golden-i will free users from the need to carry a PC or laptop about with them. Freedom from work, though, is another matter entirely. You can read all about the hardware and software details and features of this device here.

Wednesday 13 May 2009

Surround Sound transmission technology from NTT DoCoMo


NTT DOCOMO, INC. announced that it has developed a highly efficient mobile spatial audio transmission technology that enables a mobile phone user to assign a spatial position to each sound source when listening to multiple sound sources, such as during a game or a conference call.

The technology enables a user listening with headphones to, for example, hear each speaker's voice as if it were coming from a unique direction, creating a virtual face-to-face communication environment.

DOCOMO, which is continuing to research and develop the technology for eventual commercialization, foresees applications including mobile conference calls, tele-education and online games.

While existing spatial audio transmission technologies independently process audio encoding/decoding and spatial audio synthesis, the new technology offers a more efficient method by integrating the two processes, thereby minimizing bitrate (or bandwidth) and computation loads suitable for mobile phones and other resource-limited devices.

The processes are collaboratively performed on both the server and client sides. The server identifies the important sound components of each speaker's voice, compresses them efficiently into a single stream and transmits it to the mobile phones. Each phone then decodes the received stream and simultaneously synthesizes the spatial audio images.
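
As a toy illustration of the client-side step (not DOCOMO's actual algorithm), each decoded speaker stream could be given a direction by applying simple inter-aural level and time differences before mixing for headphone playback:

```python
# Toy spatial-audio synthesis: pan each decoded mono voice to a virtual
# direction with crude ILD/ITD cues. All numbers are illustrative only.

import numpy as np

RATE = 16000
MAX_ITD_S = 0.0006   # ~0.6 ms maximum inter-aural time difference

def spatialize(mono, azimuth_deg):
    """Pan a mono stream towards azimuth_deg (-90 left ... +90 right)."""
    az = np.radians(azimuth_deg)
    theta = (az + np.pi / 2) / 2                  # constant-power pan angle
    left, right = mono * np.cos(theta), mono * np.sin(theta)
    delay = int(abs(np.sin(az)) * MAX_ITD_S * RATE)
    if azimuth_deg > 0:   # source on the right -> delay the far (left) ear
        left = np.concatenate([np.zeros(delay), left])[:len(mono)]
    else:                 # source on the left -> delay the far (right) ear
        right = np.concatenate([np.zeros(delay), right])[:len(mono)]
    return np.stack([left, right], axis=1)

# Mix two "speakers" at different virtual positions.
t = np.arange(RATE) / RATE
speaker_a = 0.3 * np.sin(2 * np.pi * 220 * t)     # stand-ins for decoded voices
speaker_b = 0.3 * np.sin(2 * np.pi * 330 * t)
scene = spatialize(speaker_a, -60) + spatialize(speaker_b, +45)
print(scene.shape)    # (16000, 2): a stereo frame ready for headphone playback
```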

DOCOMO is demonstrating its new spatial audio transmission technology using docomo PRO series™ HT-01A handsets during Wireless Technology Park 2009 at Pacifico Yokohama on May 12 and 13.

Friday 24 April 2009

Innovative Designs and UI are the key to survival for Handset manufacturers


The smartphone segment of the market is poised for growth, just as a range of players are poised to release new smartphone devices in the months ahead. Among the most anticipated are new handsets based on Google's Android operating system, the next iteration of the iPhone, and the Palm Pre.

But as the number of smartphone makers proliferates, the need to create a differentiated product also increases. Much of that differentiation likely will come from the phone's user interface. Unfortunately for those in the market, it's difficult to deliver a phone with a compelling user interface that doesn't mimic all the other devices on the market.

The user interface has to be more than just a pretty face. It has to add value and ease of use for consumers. "It has to be a distinction that consumers value," said Avi Greengart, an analyst for Current Analysis. "Having a prettier set of animated weather cards isn't going to be enough."

Driving innovation may be too difficult a task for OEMs to accomplish in-house, according to John Jackson, vice president of research for CCS Insight. However, there are notable exceptions to this: HTC designed its TouchFlo3D UI in-house, and Samsung has latched onto its proprietary TouchWiz UI as the building block for its smartphones. Nevertheless, many handset makers are turning to outside firms to stay ahead of the innovation curve.

Companies such as TAT and Handmark have built their businesses around working with handset makers and operators on the user interface. TAT CEO Charlotta Falvin claims that her company's offerings sit on 10 percent of all mobile phones out on the market. Falvin said TAT's role in the design of UI is to bridge gaps between the desires and strategies of vendors and operators, a tricky proposition since operators, vendors and independent service providers all want a piece of real estate on the phone--and in consumers' minds.

"Nokia wants it to be a Nokia experience, Vodafone wants it to be a Vodafone experience and Facebook wants it to be a Facebook experience," she said. Success in creating a differentiated UI, however, will not be based around who is the first to market, or who makes the best partnerships, Falvin said, but on "who makes the best experience."

Handmark takes a similar approach. One of its main products is Pocket Express, a cross-platform application that gives users access to news, sports, weather, stocks, travel and entertainment via a single interface. Wugofski said that the service has 2 million active users.


On the other hand, Daily Wireless argues that innovative designs and thinking outside the box may be the key to success for handset manufacturers. There is a lot of innovation happening around the 'fourth screen'.

OpenPeak has created a 'fourth screen' (after the TV, computer and cell phone) for the home. It's a hub that combines features of the telephone, TV, PC and cell phone into a compact communications center.

The intuitive navigation menu on the 7-inch touchscreen makes it easy to make calls, play music, share photos, and organize your household. The device, powered by an Intel Atom processor, features 1GB of built-in storage, WiFi connectivity, an ethernet port, an audio-out jack, and a USB socket. It runs a cellular-branded version of the OpenFrame software, which appears to be based on Ubuntu Linux. It is a wired device (no battery operation).

O2, a large cellular carrier in the UK, is offering it to subscribers for £149.99, or free if taken instead of a handset when upgrading or signing a new 18- or 24-month contract. It is being marketed under the name Joggler.

The Verizon Hub is a home phone with an internet-connected base that offers users access to V Cast entertainment services, messaging, and email among other features. It will link up to an Application Store.

GiiNii plans to ship its Android-based portable media player and picture frame in October and January, respectively, according to a spokesperson. Archos announced an Android portable media player for mobile telephony.

Intel is now pushing Moblin V2 Core Alpha for netbooks, which should arrive in beta in May. According to Linux Devices, it will now (apparently) take precedence over Moblin for MIDs, which is postponed until 2010.

The UMPC Portal blog opines that MIDs based on Moblin 1.0, such as the BenQ S6, are being overwhelmed by the popularity of netbooks, so abandoned MID developers might instead move to Android or even, gulp, Windows XP.

And of course there are many other devices not mentioned here; please feel free to add them in the comments.

Sunday 19 April 2009

Sci-Fi tech that we are still waiting for

Back in 1986, I used to subscribe to a magazine called '2001' which discussed the technologies of the future. One of the things I remember reading is that by 2001 we would have cars that run on water and prototypes of cars that could fly. We are just starting to see cars that run on hydrogen and emit water vapour, but that is still far from a car that runs on water. And I would say we are still a long way from a prototype flying car.

IT PRO has an article on the top 10 technologies we have been wishing for that are still not close to reality. Have a look here.

Saturday 11 April 2009

Future Phones will be able to understand your thoughts

Honda in Japan is working on a technology that lets robots understand their owner's thoughts. Right now only four commands can be understood, but the success rate is 90%. If this technology becomes successful, it can probably be applied to phones as well.

I remember reading (can't find the link, sorry) that NTT DoCoMo has already developed a prototype phone with which you can speak without making any sound and the person at the other end won't even notice; they will hear a normal voice.

NTT DoCoMo launched motion-sensing phones a couple of years back, the main idea being that users could control things with the motion of their hands. I haven't dug into the details, but I can visualise myself in the future working on my laptop and, just by waving my hand, asking my mobile to start composing a text message. I would dictate the message and, with another wave of my hand, the message would be sent.

Japan has always been a leader in these kinds of technologies, and companies there are working hard on new innovations. NTT DoCoMo (again) showed off a technology last year where the volume can be controlled just by rolling the eyes. At the moment all these things involve some kind of attachment to the body, which makes them impractical for the time being. Hopefully in the future there will be better alternatives and more reliable versions of technologies like these.

Anyway, we won't see any of the above technologies anytime soon. There is a funny YouTube video about these future technologies that you will like, available below:

Tuesday 17 March 2009

IPHOBAC's advanced photonic technologies: Up to 12.5 Gbit/s @ 60 GHz


With much of the mobile world yet to migrate to 3G mobile communications, let alone 4G, European researchers are already working on a new technology able to deliver data wirelessly up to 12.5Gb/s.

The technology – known as ‘millimetre (mm)-wave’ or microwave photonics – has commercial applications not just in telecommunications (access and in-house networks) but also in instrumentation, radar, security, radio astronomy and other fields.

Despite the quantum leap in performance made possible by combining the latest radio and optics technologies to produce mm-wave components, it will probably only be a few years before there are real benefits for the average EU citizen.

This is thanks to research and development work being done by the EU-funded project IPHOBAC, which brings together partners from both academia and industry with the aim of developing a new class of components and systems for mm-wave applications.

The mm-wave band is the extremely high frequency part of the radio spectrum, from 30 to 300 gigahertz (GHz), and it gets its name from having a wavelength of one to 10 mm. Until now, the band has been largely undeveloped, so the new technology makes more of the scarce and much-in-demand spectrum available for exploitation.
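
A quick sanity check on the "millimetre" name, using nothing more than wavelength = c / f:

```python
# Wavelengths at the edges (and middle) of the 30-300 GHz band.
c = 3.0e8                                    # speed of light, m/s
for f_ghz in (30, 60, 300):
    print(f"{f_ghz} GHz -> {c / (f_ghz * 1e9) * 1000:.1f} mm")
# 30 GHz -> 10.0 mm, 60 GHz -> 5.0 mm, 300 GHz -> 1.0 mm
```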

The project recently unveiled a tiny component, a transmitter able to emit a continuous signal not only through the entire mm-wave band but beyond. Its full range is 30 to 325 GHz, and even higher frequency operation is now under investigation. The first component worldwide able to deliver that range of performance, it will be used in both communications and radar systems. Other components developed by the project include 110 GHz modulators, 110 GHz photodetectors, 300 GHz dual-mode lasers, 60 GHz mode-locked lasers, and 60 GHz transceivers.

Project coordinator Andreas Stöhr says millimetre-wave photonics is a truly disruptive technology for high frequency applications. “It offers unique capabilities such as ultra-wide tunability and low-phase noise which are not possible with competing technologies, such as electronics,” he says.

What this will mean in practical terms is not only ultra-fast wireless data transfer over telecommunications networks, but also a whole range of new applications.


One of these, a 60GHz Photonic Wireless System, was demonstrated at the ICT 2008 exhibition in Lyon and was voted into the Top Ten Best exhibits. The system allows wireless connectivity in full high definition (HD) between devices in the home, such as a set-top box, TV, PC, and mobile devices. It is the first home area network to demonstrate the speeds necessary for full wireless HD of up to 3Gb/s.

The system can also be used to provide multi-camera coverage of live events in HD. “There is no time to compress the signal as the director needs to see live feed from every camera to decide which picture to use, and ours is the only technology which can deliver fast enough data rates to transmit uncompressed HD video/audio signals,” says Stöhr.

The same technology has been demonstrated for access telecom networks and has delivered world record data rates of up to 12.5Gb/s over short- to medium-range wireless spans, or 1500 times the speed of upcoming 4G mobile networks.

One way in which the technology can be deployed in the relatively short term, according to Stöhr, is wirelessly supporting very fast broadband to remote areas. “You can have your fibre in the ground delivering 10Gb/s but we can deliver this by air to remote areas where there is no fibre or to bridge gaps in fibre networks,” he says.

The project is also developing systems for space applications, working with the European Space Agency. Stöhr said he could not reveal details as this has not yet been made public, save to say the systems will operate in the 100GHz band and are needed immediately.

There are various ongoing co-operation projects with industry to commercialise the components and systems, and some components are already at a pre-commercial stage and are being sold in limited numbers. There are also ongoing talks with some of the biggest names in telecommunications, including Siemens, Ericsson, Thales Communications and Malaysia Telecom.

“In just a few years' time everybody will be able to see the results of the IPHOBAC project in telecommunications, in the home, in radio astronomy and in space. It is a completely new technology which will be used in many applications, even medical ones, where mm-wave devices to detect skin cancer are under investigation,” says Stöhr.

You can see their demo here.

Saturday 14 March 2009

Next Generation “Sixth Sense” game-changing wearable tech

TED has this very interesting concept from Pattie Maes’ lab at MIT, spearheaded by Pranav Mistry. It’s a wearable device with a projector that paves the way for profound interaction with our environment. Imagine “Minority Report” and then some...


It's just a matter of time after this concept becomes reality before it becomes available in mobiles, etc.

Wednesday 17 December 2008

Satellite-based Mobile Internet of the future

Background: The current US military satellite communications network represents decades-old technology. To meet the heightened demands of national security in the coming years, newer and more powerful systems are being developed.

Advances in information technology are fundamentally changing the way military conflicts are resolved. The ability to transmit detailed information quickly and reliably to and from all parts of the globe will help streamline military command and control and ensure information superiority, enabling faster deployment of highly mobile forces capable of adapting quickly to changing conditions in the field. Satellite communications play a pivotal role in providing the interoperable, robust, "network-centric" communications needed for future operations.

Military satellite communications (or milsatcom) systems are typically categorized as wideband, protected, or narrowband. Wideband systems emphasize high capacity. Protected systems stress antijam features, covertness, and nuclear survivability. Narrowband systems emphasize support to users who need voice or low-data-rate communications and who also may be mobile or otherwise disadvantaged (because of limited terminal capability, antenna size, environment, etc.).

For wideband communication needs, the Wideband Gapfiller Satellite program and the Advanced Wideband System will augment and eventually replace the Defense Satellite Communications System (DSCS). These satellites will transmit several gigabits of data per second—up to ten times the data flow of the satellites being replaced. Protected communications will be addressed by a global extremely high frequency (EHF) system, composed of the Advanced Extremely High Frequency System and Advanced Polar System. These systems are expected to provide about ten times the capacity of current protected satellites (the Milstar satellites). Narrowband needs are supported by the UFO (Ultrahigh-frequency Follow-On) constellation, which will be replaced by a component of the Advanced Narrowband System.



Lockheed Martin Space Systems, Hughes Space and Communications and TRW have formed a National Team to build the Department of Defense's (DOD) next generation of highly secure communication satellites known as the Advanced Extremely High Frequency (AEHF) system.

The Advanced EHF programme provides the follow-on capability to the Milstar satellite programme. It provides the basis for the next-generation military communications satellite system, offering survivable, jam-resistant, worldwide, secure communications for the strategic and tactical warfighter. The system replenishes the Milstar constellation in the EHF band.

Each of these Advanced EHF satellites employs more than 50 communications channels via multiple, simultaneous downlinks. Launch of the first AEHF satellite is planned for April 2008 with the second AEHF satellite scheduled for launch in April 2009.

The fully operational Advanced EHF constellation will consist of four crosslinked satellites, providing coverage of the Earth from 65° north latitude to 65° south. These satellites will provide more data throughput capability and coverage flexibility to regional and global military operations than ever before. The fifth satellite built could be used as a spare or launched to provide additional capability to the envisioned constellation.


Current Status: After being plagued with project overruns and a scaling back of the final system, the US military's next generation satellite communications network is another step closer to reality, with completion of the payload module for the third and final Advanced Extremely High Frequency (EHF) satellite.

Although the EHF band (30-300 GHz) is a relatively lightly used part of the electromagnetic spectrum, this is for good reason. Atmospheric attenuation is the biggest problem faced in this band, especially around 60 GHz; however, the frequencies are viable for short-distance terrestrial communication links, such as microwave Internet and telecommunication links (which already operate in this band). Millimetre-wave radar, probably best known as the radar that can see through your clothes but not your skin, also operates in this band.

An increasing number of military and civil satellite systems are using this band for uplink and downlink, as well as for inter-satellite communication, designing around the most problematic frequencies while accepting increased overall atmospheric attenuation. Inter-satellite communication is really where EHF equipment shines (no atmosphere, small antennas, high data rates).

Civilian systems currently operate around the Ku band (Intelsat), providing data rates of up to 2-4 Mbps (14 GHz uplink, 12 GHz downlink); however, these rates have yet to trickle into everyday users' hands for remote and mobile Internet access. It is more common that an aggregator will access this link/rate and use it to portion out local Internet access. Systems such as this are in use for remote Australian territories like the Cocos and Christmas Islands, and formed the backbone of Boeing's stillborn Connexion in-flight Internet access. High ongoing access costs (basically a share of the overall cost of the satellite) and limited access slots help keep the technology away from everyday use at this time. Militaries and governments around the globe also lease access on these circuits when they need the added capability, with Intelsat and Inmarsat systems being used in the first Gulf War.

Advanced EHF is designed to provide 24-hour coverage from 65° North to 65° South across the K and Ka sub-bands and, when combined with the prototyped Extended Data Rate (XDR) terminals and systems, will offer data rates of up to 8.2 Mbps for around 4,000 terminals in concurrent use per satellite footprint (whether that scales to 12,000 systems in concurrent use globally isn't clear from the source material).

Within the tri-satellite constellation, inter-satellite EHF links will allow terminals on opposite sides of the globe to communicate in near real-time without the use of a terrestrial link. Combined with smaller, directional antennas and the various options for anti-jamming technology, it represents a significant military capability for the US.

Plans are already being drawn up for the Transformational Satellite Communications System (T-Sat), which will replace Advanced EHF starting sometime in 2013; however, it is already facing funding troubles. This could be problematic, with Advanced EHF still struggling to reach capability and the final launch not scheduled until April 2010. Dropping the fourth satellite of the Advanced EHF constellation has been planned to give the USAF time to implement T-Sat more rapidly.

If GPS and remote imaging (think Google Earth) have proven anything, it is that technology initially developed for military purposes, and extremely expensive for initial civil use, will eventually reach the point where it forms part of our daily lives without us ever being conscious of the massive investment to get to that point.

Monday 1 December 2008

Nokia to power Smarter Homes

Nokia Home Control Center - My home is where my phone is

Nokia Home Control Center is a solution based on an open Linux-based platform enabling the home owner to build a technology-neutral smart home that can be controlled with a mobile phone, using a unified user interface. Nokia Home Control Center supports the most common smart home technologies, including Z-Wave, as well as enabling the incorporation of proprietary technologies. Thus, it allows third parties to develop their own solutions and services on top of the platform, expanding the system to support new services and smart home technologies.

Building blocks for an intelligent house are readily available in the market. Putting it all together is, however, like trying to build a house from blocks that do not fit with each other. There are smart refrigerators, energy-saving washing machines, heating systems that can adjust the room temperature with one-degree-Celsius accuracy, security systems with touch panels, low-energy walls, programmable thermostats, self-adjusting curtains, configurable set-top boxes, self-operating yard lights and much more. The problem is that all these systems are separate, and you end up with a dozen remote controls and miles of cables in the living room.

Until now, solutions to home automation challenges have been sought through the development of better sensor networks. Although they are, of course, very important parts of new smart home solutions, no single sensor network technology can solve the challenges in this field. Z-Wave, ZigBee, and KNX are all attempts to define a common command language for home networks. So far, there has not been a clear winner in the battle for the de facto standard of home networks. Hence, it can be assumed that a future home will use several different technologies.

The Nokia Home Control Center acts as a dictionary that translates different technological languages so that they can be presented in a unified user interface. Furthermore, the platform enables grouping different physical devices, even from different manufacturers, to be presented for the user in an easy-to-understand way.

The whole Nokia solution consists of four main components:

1. The heart of the solution is the Nokia Home Control Center, which is built on top of a standard gateway architecture.
2. The two most important control nodes are the mobile phone and the web browser.
3. The back-end server architecture ensures a seamless and secure link between a mobile device and the home gateway, and also makes it easy to update and upgrade software.
4. The partner devices. In addition to the components that Nokia is providing, the value for the end customer comes from the integration of different third party devices and systems under the control of one user interface.

It will be possible, for example, to monitor and control electricity usage, to switch devices on and off, and to monitor different inputs, such as temperature, camera, and motion. On one hand, Nokia Home Control Center can be used as a WLAN gateway. On the other hand, the platform covers everything from a basic security solution to a more sophisticated heating control system. Users are free to build a solution that fits their needs and expand it whenever they want.
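
As an illustration of the "dictionary" idea described earlier, the sketch below uses invented protocol classes (nothing here is Nokia's actual API): each underlying technology gets a small adapter, and the controller exposes one uniform interface regardless of the device behind it.

```python
# Toy adapter layer: a unified switch_on/switch_off interface over invented
# Z-Wave and proprietary device classes. For illustration only.

class ZWaveLamp:
    def set_level(self, level: int): print(f"Z-Wave lamp level {level}")

class ProprietaryHeater:
    def write_register(self, reg: str, value): print(f"heater {reg} = {value}")

class DeviceAdapter:
    """Uniform interface the UI (phone or web browser) talks to."""
    def switch_on(self): raise NotImplementedError
    def switch_off(self): raise NotImplementedError

class ZWaveLampAdapter(DeviceAdapter):
    def __init__(self, lamp): self.lamp = lamp
    def switch_on(self): self.lamp.set_level(99)
    def switch_off(self): self.lamp.set_level(0)

class HeaterAdapter(DeviceAdapter):
    def __init__(self, heater): self.heater = heater
    def switch_on(self): self.heater.write_register("target_temp", 21)
    def switch_off(self): self.heater.write_register("target_temp", 5)

home = {"living room lamp": ZWaveLampAdapter(ZWaveLamp()),
        "hall heater": HeaterAdapter(ProprietaryHeater())}

# The phone UI only ever sees device names and the uniform interface.
home["living room lamp"].switch_on()
home["hall heater"].switch_off()
```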

Mobility is becoming increasingly important in home environments, as wireless technologies for smart home solutions are emerging. As structured wiring is no longer required, these are no longer niche products meant only for new houses. Wireless broadband has become mainstream and multimedia consumption over home networks is increasing. From many studies we know that moving from a multimedia network to a smart home network is a much smaller step than building a wired smart home from scratch. Finally, the last barrier of high equipment prices is breaking down as the technology becomes more and more common.

Nokia 'Home Control Center' features and technical data can be seen here.

The following is additional info from the press release:

Nokia today also announced a partnership with one of Europe's biggest energy companies, RWE. The co-operation aims at developing a comprehensive solution for managing energy consumption and the CO2 footprint at home. This cooperation combines RWE's energy competence with Nokia's technological know-how.

With this in mind, the first joint solution from Nokia and RWE, due in late 2009, will focus on home heating management. The product consists of a central control unit together with remote-controlled thermostats for the actual radiators. The user interface will be the PC and the mobile phone. In addition, a separate display will be available. RWE is also planning special offers combining these devices with new energy supply contracts. In a second step, Nokia and RWE are planning additional services in connection with smart meters beyond 2009. These services will provide consumers with real-time information about their energy consumption and allow them to control their energy bill remotely.

"We are delighted to have secured a world-leading technology partner in Nokia for our range of smart home energy products. Our aim is to offer innovative and affordable energy-efficient solutions for every household that are simple and convenient to operate", said Carolin Reichert, Head of New Business at RWE.

Nokia Home Control Center will be part of Nokia's home offering. The solution will be demonstrated at the Nokia World event in Barcelona, Spain, on December 2-3, 2008 and is expected to become commercially available by the end of 2009.