Intel, Latest Technology

The Top Android Apps of 2013

The top Android apps of 2013 have been put together by Google Play users, with more than one million votes cast in the first annual Players’ Choice: Top Google Play Apps and Games poll. There were several unique categories in these user-submitted picks, including Old School Still Cool, Best Game Based on a Movie, and even a write-in category, where players took the time to write and tell Google Play about their favorite apps.

Revenue increasing

According to a recent report from Distimo, an app analytics company, 2013 was a particularly good year for mobile and for Android in particular. Here are some of the most pertinent statistics from the report:

  • During 2013, Google Play’s revenue share grew at Apple’s expense, though in November 2013 the Apple App Store still led with 63 percent, versus 37 percent for Google Play.
  • The top 10 countries in terms of mobile app revenue from the Apple App Store and Google Play are: United States, Japan, South Korea, United Kingdom, China, Australia, Germany, Canada, France, Russia
  • For some apps, the download volumes from the Amazon Appstore started to compete with download volumes in established app stores like the Apple App Store and Google Play.
  • None of the newly released apps of 2013 reached a top 10 position in the yearly grossing charts in the Apple App Store. In contrast, four of the top 10 grossing apps on Google Play were released in 2013.

In terms of revenue growth, Distimo reports that Asian countries are by far leading the way in sheer money spent on apps. A recent report from search engine Baidu confirms this, especially for Android apps:

  • There are now over 270 million active daily Android users in China
  • This reflects 13% overall growth in Q3 2013, compared to a 55% growth rate in the same quarter a year ago
  • Most Android device sales (52%) come from users upgrading to new Android phones; 48% are users purchasing a smartphone for the very first time
  • A large part of Android growth (45%) is focused in rural areas and small cities
  • Android owners spend upwards of 150 minutes a day on their smartphones (an increase of 26 minutes from the previous year), checking their devices an average of 53 times a day
  • 44% of Android users in China use Wi-Fi for their access to the Internet, especially for video. 31% get their information from 2G networks, and 23% use 3G.
  • App downloads for Chinese Android device owners are growing exponentially: the average user downloaded 10.5 apps per month in Q3 2013; the previous year, it was 8.2 apps monthly
  • 15% of Android users in China install at least one new app a day vs. 11% in Q3 2012
  • 59% use app stores to download their apps, while 13% use online app searches and 21% use their PCs to sideload apps onto their Android devices

A growing share of Google Play revenue came from freemium apps with in-app purchases in 2013, rising from 89% to 98% according to the Distimo report.  The freemium, or “free to play,” monetization model is proving overwhelmingly successful for most developers, with in-app purchases and level-up opportunities monetizing quite well. One good example is a very popular game based on the TV show “The Simpsons”:

“The Simpsons: Tapped Out is what's known in the gaming industry as a "free to play" game—it doesn't cost anything to play, but you won't be able to complete every level or collect every item without shelling out real-world currency. Some people are capable of resisting the temptation to pay for exclusive items or boosts or whatever's being sold, but it's not uncommon for players who get seriously hooked to end up investing hundreds or thousands of dollars in an ostensibly free game.” – “The Devious Psychology Behind Free to Play Video Games”, ChicagoReader.com

Popular Android games

Here is the list of games as voted on by Google Play users:

  • Knights & Dragons, voted the Most Addictive Game of 2013; a non-stop action RPG with endless battles against mythical creatures and knights in one massive action-packed adventure.
  • Bejeweled Blitz, voted as the Best Franchise Game of 2013; “Play for free on your Android as you match and detonate as many gems as you can in 60 action-packed seconds and compete with Facebook friends. Match 3 or more and create cascades of fun with Flame gems, Star gems, and Hypercubes. Add up to three Boosts at a time plus powerful Rare Gems to send your score soaring, and dominate the weekly leaderboards!”
  • The Hobbit; voted as Best Game Based on a Movie for 2013; “Gandalf, Bilbo, Thorin and thousands of players require your help to pacify the Goblin threat in the new combat strategy game from Kabam. Play as an Elf or a Dwarf, build up your city, and destroy the Goblins. But remember - the rift between Elves and Dwarves runs deep!”
  • Duolingo, voted as Best App for Enhancing the Everyday; Learn Spanish, French, German, Portuguese, Italian, and English. Totally fun and 100% free.
  • Movies by Flixster, voted as Best App for Booking and Buying; you can buy tickets, watch movie trailers, even full-length movies on your Android device.
  • YouTube, voted as Best Google App; simple and easy, you can watch videos, connect with other people, and discover content you wouldn’t have otherwise. YouTube just seems to keep getting better after a couple of rough years.

Several apps were written in for inclusion by Google Play users, including Ingress, Samurai Siege, NewsHog, and SwiftKey Keyboard.

As you can see from this list, there are several niche apps here that are doing especially well, and developers are building more and more apps to meet the increasing demand. While app stores don’t always offer the most intuitive search platforms on which to be found, and the landscape is always going to be competitive, the good news is that consumers want more apps, and smart developers will be able to meet this need. The goal of Google Play is threefold: help users find what they’re looking for, help Android ship more apps, and help developers build their brands. Google Play wants developers to succeed at what they do, and they’ve given you the tools to make it happen. For more information on developing for Android, go to the Intel® Android Developer Zone.


The Grinch Who Stole Christmas for Target’s Brand and Customers


40 million card numbers stolen. Will your firm be the next target?

News broke last week that a major retailer was the victim of a massive theft of customer credit card data, in what is becoming an all too common cadence of data breaches.  Thieves made off with not just the credit card numbers, but also the CVVs and expiration dates.  If you listen closely, you can probably hear the machines printing up counterfeit cards.  At this point there has been no precise confirmation of the attack vector used to collect the data – and the gory details may never be known, absent some government action and FOIA request.  But in light of what is likely one of the biggest data breaches in history, it makes sense to reflect on some of the Payment Card Industry (PCI) best practices for protecting customer data.  PCI is more than just compliance; it should be viewed as a catalyst to improve overall network defense and data protection.  In this first post we cover high-level PCI and PII practices that can make a difference; in subsequent posts we will examine in more detail how the Target breach could have been prevented.

Protecting the Brand and the Business

The fallout for Target will surely run into the millions of dollars, going beyond fines and reimbursements to consumers to the damage done to the Target brand. Shoppers placed their confidence in Target’s information security practices. While Target is doing a good job of notifying customers of the steps they can take to protect their credit ratings, I have seen several shoppers thinking twice about using their credit cards while doing last-minute Christmas shopping — not just at Target but at other retailers and in other industries. Undoubtedly customers are more wary than ever of giving out credit card data unless they see a “Good Housekeeping” seal of approval showing that a merchant has tested its POS and back-end systems to the best of its ability. This mistrust will also spill over into e-commerce, where consumers were already wary of giving out personal data.

PCI Tools for Credit Card Security

Just because a retailer is PCI compliant doesn’t mean it’s 100% secure.  But maintaining PCI compliance will reduce the likelihood of data theft – make it hard enough for an attacker and they will hopefully move on to a more vulnerable target.  Unfortunately, the scale of compliance can be daunting for many organizations.  Considering the size of last week’s theft – some 40 million account numbers compromised – it’s clear that high-capacity, high-volume solutions are needed.  Many existing solutions haven’t kept pace with the rate of technology change, leaving implementers overwhelmed as they attempt to rip and replace while moving more of their businesses online.


Remediation for legacy technologies can be challenging


There are six milestones in the Prioritized Approach to Pursue PCI DSS Compliance.  These are necessary but not sufficient for security.  Looking at these milestones, it’s clear that the Gateway Pattern can help a retailer (or anyone processing payment card or any other sensitive data) achieve compliance.  Furthermore, a gateway can allow faster remediation of legacy systems and applications because it can be inserted into the application stack without significant modification to these existing applications.  When new best practices are identified, they can be implemented quickly as well, without being mired in the bureaucracy and expense of custom implementations.  This is essential as companies are moving to the cloud for part or all of their transactions.  Here are some pointers on how a service gateway can shorten the path to PCI DSS compliance:
  • First, remove (i.e. redact) sensitive authentication data and limit data retention.  Thieves can’t steal what isn’t there.  Card verification numbers, PINs, and magstripe data tracks are not to be stored, as those would enable unauthorized use in more locations via card-not-present and ATM transactions.  With the increasing dependence on web services for transmitting data across systems and providers, the gateway pattern can help fulfill this requirement by redacting internal fields within a data object (e.g. JSON or XML), ensuring that sensitive data isn’t persisted or passed downstream to any application that doesn’t require it.
  • Second, protect the perimeter, internal, and wireless networks.  This becomes much more challenging in today’s distributed environments.  Many payment processing systems use private networks and firewalls to prevent unauthorized access.  However, card readers and cash registers are by necessity at the edge, publicly accessible – once a device inside the trusted network is compromised, it can be leveraged to gain additional access.  Going a step further, application-specific firewalls or gateways can provide additional network security, which feeds into the next requirement:
  • Third, secure payment card applications.  Application level security creates an additional internal perimeter to protect against larger-scale data breaches.  An application-specific firewall or gateway can provide an additional layer of security that protects against both external and internal attacks.  A content attack prevention policy, for example, can limit the spread of an attack that comes from a single compromised system.  By monitoring inbound traffic — even from trusted systems — a gateway can help to prevent a content attack such as a code injection masquerading as a valid call from a compromised payment system to a back-end web service or API.
  • Fourth, monitor and control access to your systems.  Entire books can (and have) been written on this topic, but I’ll highlight a few key benefits of a gateway pattern.  First, a multi-tenant gateway allows an organization to separate responsibilities by job function, meaning that a single person doesn’t need to be granted administrative access to every service.  Additional logging capabilities provided by the gateway can aid in early detection of malicious activity, alerting to suspicious traffic or other patterns that warrant attention.  The gateway can also securely sign the logs to ensure that they haven’t been tampered with.
  • Fifth, protect stored cardholder data.  This goes deeper than simply encrypting the volume where the data is stored.  With the move to cloud storage, data is stored in numerous locations – in replicas and application caches in addition to primary storage.  Best practices suggest using tokenization or record-level encryption to protect cardholder data.  For the credit card numbers themselves, tokenization provides an added benefit of a secure central vault that contains the mapping between card numbers and tokens that can safely be passed between applications.  Randomized tokens have no mathematical relationship to the original cardholder data, so systems that only access tokens are effectively removed from audit scope.  This means that the addition of the gateway layer actually reduces audit complexity and cost!
  • Finally, finalize remaining compliance efforts and ensure that controls are in place, including ongoing verification and maintenance of compliance posture.  This is where the gateway pattern (particularly when it includes tokenization) really shines — by attesting that downstream systems and services never touch cardholder data, the retailer can dramatically reduce the scope of work to be done.  Audit and verification cost time and money (best practices include using an outside specialist), so reduction in scope means less complexity and therefore much less cost.  In addition to the insurance against potential costs from a data breach, scope reduction can save millions of actual dollars in audit.

Summary

Thieves are getting savvy with their attempts to gather cardholder data on all fronts.  Attacks on retailers and banks, while difficult to pull off, present potentially enormous return on investment.  If successful, they also impose a tremendous liability on any organization that doesn’t adequately protect its data.  Maintaining customer trust and brand reputation means being a good custodian of data – not just credit cards, but also names, email addresses, and any other personal information.  Gaining and maintaining PCI compliance is a good first step in protecting customer data, and with it the corporate brand.  Using best practices and tools to do so can accelerate the compliance process and reduce the overall cost of staying compliant.  A couple of resources that can help take the next step in using tokenization for PCI compliance:  a Tokenization Buyer’s Guide, and a QSA Security Assessor’s guide that explains how using a gateway for tokenization helps to remove systems from audit scope.


Apple’s Top Apps of 2013: Design, Language, and Interaction

‘Tis the season for sharing the best of the year, and the Apple Store has just released its best-apps-of-the-year roundup.  While some of the picks are based purely on number of downloads, others are based on aesthetics, interactivity, and levels of user engagement.  These lists are always interesting to peruse, both for seeing what was trending over the past year and for looking ahead to what we might expect in the new year.

App of the Year: Duolingo

Languages used to be somewhat expensive to learn; you either had to sign up for a complete course at your local community college or purchase an extremely expensive software package that usually ended up collecting dust under your desk. With the advent of new educational paradigms that aim to make learning accessible to everyone, this is becoming a thing of the past. Duolingo, the Apple Store’s pick for App of the Year, makes language learning available to everyone at the click of a button:

 “Sure, there was Rosetta Stone, but it's an expensive program (it starts at $274), and the folks who really need to learn to speak other languages "don't have the money," says von Ahn. "They need it to get a better job."

So they put their heads together and came up with Duolingo, which was released at the end of 2012. Twelve months later, Duolingo has seen 10 million downloads. And now Apple has deemed it the free iPhone App of the Year. (The app is also available for Android.)” – USAToday.com


Luis von Ahn, the creator of Duolingo (and, just as an aside, also the creator of CAPTCHA), decided to create Duolingo to not only equip users to learn languages, but also to enable millions of Duolingo users from all over the world to work together to solve large-scale problems; like, for example, translating the Internet. You can watch the talk he gives at TED below about this very issue:

Game of the Year: Ridiculous Fishing

I’ll be the first to admit that I am not a game person; however, I’m sort of drawn to the Apple Store’s pick for Game of the Year, Ridiculous Fishing. Retro graphics and a compelling story are what keep me and millions of other people around for hours of gameplay.

This somewhat-off-the-beaten-path game was put together by a small independent team of developers out of the Netherlands called Vlambeer, who based this game on their original “Radical Fishing” app: “Follow Billy as he tries to find redemption from his uncertain past. Chase your destiny on the high seas and embark on a heroic quest for glory and gills.” Watch a quick demo of this endearing game below:

The game was recently ported to Android, with the development team writing up a quick summary of their thoughts on the process and how it went, something of interest to other developers who have or who are thinking of doing the same thing with their apps:

“Android (and porting in general) is a particularly tough challenge for a game as precise and meaningful to us as Ridiculous Fishing is, but we’re happy to say we finally reached the point where we felt comfortable releasing the game recently & decided to try something we felt would harken back to our original launch plans. ….This is our first foray into Android, and we’re quite nervous to see how this launch will go.” – A little update regarding Ridiculous Fishing for Android

Trends

People aren’t going straight to the social networking apps and games for their downloads; although these two categories are far and away the leaders for any app store out there, other apps are definitely making their voices heard (as seen in the App of the Year pick). The Apple Store highlighted trends in 2013 that run from easy photo editing to indie games to innovative kids’ apps, as well as a plethora of apps that spotlight the ability to communicate via split-second video (like Vine or Snapchat).

Top Lists

Minecraft topped the list of Top Paid games for 2013; this indie studio game has made its mark in the world of “crafting” games (games where you create your own reality within the game).  Creator Markus Persson and his game were profiled in the New Yorker:

“Since the game’s release, in 2009, Minecraft has sold in excess of twenty million copies, earned armfuls of prestigious awards, and secured merchandising deals with LEGO and other toymakers. … Persson and his game continue to confound the wisdom of video-game critics, consultants, and publishing mavens. For one, Minecraft looks nothing like the multi-million-dollar blockbusters that usually line GameStop’s shelves; its graphics and sound effects are rudimentary. It is also willfully oblique, with no instruction manual and few explicit goals. At first, you are deposited in a unique, procedurally generated world built from a palette of colored one-by-one square building blocks that comprise its mountains, valleys, lakes and clouds. Faced with this canvas, at first your task is mere exploration, charting the terrain around you.”

Topping the list for Top Free and the Top Grossing Games was the phenomenally successful Candy Crush Saga, the game everyone loves to berate but then secretly plays for hours on their phone (speaking hypothetically, of course). Candy Crush Saga became so successful in part because of its social interactivity; user engagement is one of the primary factors in this game:

“Candy Crush tells players "you need to share this because it benefits your friends."

Players can only play five levels at a time until they run out of 'hearts', meaning they must wait before playing again unless they buy more - but during this time they can give hearts to their friends.

"After a while of helping everyone else," Lovell says: "You think 'I don't mind asking them for help now because I've shared the love, it's my turn [to ask for help and be rewarded]'." - “Freemium app purchases and Candy Crush”, ibtimes.co.uk

It’s almost altruistic in nature, but it also taps into the ubiquitous desire we all have to communicate and engage with those around us – while having fun. And since in-app purchases come in such small amounts, players don’t register that they are giving up anything of value; on the contrary, it’s almost as if they are investing in their own game play.
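The hearts mechanic Lovell describes is essentially a timed resource the player can wait out, buy back, or receive from friends. A minimal sketch of that loop (the five-heart cap comes from the quote above; the regeneration interval and class names are illustrative, not King’s actual implementation):

```python
import time

REGEN_SECONDS = 30 * 60  # one heart every 30 minutes (illustrative)
MAX_HEARTS = 5           # "players can only play five levels at a time"

class Hearts:
    def __init__(self, now=None):
        self.count = MAX_HEARTS
        self.last_regen = now if now is not None else time.time()

    def available(self, now=None):
        """Hearts on hand, after crediting any that regenerated."""
        now = now if now is not None else time.time()
        regenerated = int((now - self.last_regen) // REGEN_SECONDS)
        if regenerated:
            self.count = min(MAX_HEARTS, self.count + regenerated)
            self.last_regen = now
        return self.count

    def spend(self, now=None):
        """Play a level; fails once the hearts run out."""
        if self.available(now) == 0:
            return False  # player must wait, buy more, or ask a friend
        self.count -= 1
        return True

    def gift_from_friend(self):
        """The social loop: a friend sends a heart."""
        self.count = min(MAX_HEARTS, self.count + 1)
```

The monetization pressure lives entirely in that `False` branch: the wait is long enough to be annoying, and both the purchase and the friend request remove it.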

What will we see in 2014?

It’s always difficult to predict trends, but judging from these top apps in the Apple Store as well as what’s going on in the current technology landscape, apps are uniquely poised to continue to lead the way in how users interact with mobile technology – and vice versa. What do you think will be the trends for 2014 in apps? Leave your thoughts in the comments.


Quick Start Guides Published for the Intel® Xeon Phi™ Coprocessor Expert User

Hi everyone,

This is a short notice to let you know that two new articles have been published for the Intel® Xeon Phi™ coprocessor:

  • Quick Start Guide: For the Intel® Xeon Phi™ Coprocessor Developer
  • Quick Start Guide: For the Intel® Xeon Phi™ Coprocessor Administrator

The target of both of these guides is the expert user. Our assumption is that the expert user does not need to be told what to do, as he already has potentially decades of experience doing his job. Similarly, he does not need to be told how to research his area of expertise as he has done so dozens of times in the past. As these users are new to administering or developing on the Intel Xeon Phi coprocessor, they want to know only where they can find key resources, such as cluster administration guides, technical support and examples.

The administrator

Someone who will administer and support a set of machines (individual/cluster) containing coprocessors. The assumption is that references to the following topics are of most interest to him.

  • Administrative tools and configurations for the Intel® Manycore Platform Software Stack (Intel® MPSS)
  • Technical support services
  • Library support
  • Language support
  • Network infrastructure
  • Installation documentation
  • Cluster administration and FAQ
  • Scripting support

The Developer

Someone who will be programming on an Intel® Many Integrated Core (Intel® MIC) architecture. The assumption is that references to the following topics are of most interest to him.

  • Brief Introduction to the Intel MIC development environment
  • Programming models
  • Hardware architecture
  • Software stack
  • Coprocessor specific drivers and tools – Intel Manycore Platform Software Stack (Intel MPSS)
  • Compilers
  • Libraries
  • Tools
  • Examples and tutorials
  • SW Developer’s Guide
  • Programmer’s Guide
  • Optimization Guide
  • Getting help and other support

If you find something missing, please let us know in the comment section of the articles.

REFERENCES

Kidd, Taylor, “Quick Start Guide: For the Intel® Xeon Phi™ Coprocessor Developer,” http://software.intel.com/en-us/articles/quick-start-guide-for-the-intel-xeon-phi-coprocessor-developer, version 0.1, December 14th, 2013.

Kidd, Taylor, “Quick Start Guide: For the Intel® Xeon Phi™ Coprocessor Administrator,” http://software.intel.com/en-us/articles/quick-start-guide-for-the-intel-xeon-phi-coprocessor-administrator, version 0.1, December 14th, 2013.


Augmented Reality and Apps: A Natural Progression

Apps are getting more sophisticated every day. One of the technology factors moving this progress forward is augmented reality:

“Augmented reality (AR) is a live, direct or indirect, view of a physical, real-world environment whose elements are augmented by computer-generated sensory input such as sound, video, graphics or GPS data. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented), by a computer. As a result, the technology functions by enhancing one’s current perception of reality. By contrast, virtual reality replaces the real world with a simulated one. Augmentation is conventionally in real-time and in semantic context with environmental elements, such as sports scores on TV during a match. With the help of advanced AR technology (e.g. adding computer vision and object recognition) the information about the surrounding real world of the user becomes interactive and digitally manipulated. Artificial information about the environment and its objects can be overlaid on the real world.” – What is Augmented Reality? Freebase.com

There’s no doubt that augmented reality is a game-changer, as this technology gives app developers the ability to layer digital input on top of the real world. For example, perhaps you’re visiting a national park; you could see points of interest and historical trivia pop up on your mobile device as you make your way around the grounds, effortlessly, just by activating an app. Or maybe you’re walking around a store and need to find something that seems to elude you. Instead of spending time trying to track down a harried store employee or trudging up and down the aisles in vain, your mobile device could locate the item via GPS and other sensors.  This will become the norm rather than the exception: a report from Juniper Research predicts that AR apps on mobile devices will generate revenues of almost $300 million in 2013, and that by 2017, 2.5 billion AR apps will be downloaded onto mobiles or tablets every year. This infographic from Trigg-AR paints a broader picture:
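The national-park example boils down to comparing the device’s GPS fix against a list of points of interest and surfacing whatever falls within range. A minimal sketch of that query (the POI names and coordinates are invented for illustration):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical points of interest for a park: (name, lat, lon)
POIS = [
    ("Visitor Center", 44.4580, -110.8281),
    ("Geyser Overlook", 44.4605, -110.8300),
    ("Trailhead", 44.5000, -110.9000),
]

def nearby(lat, lon, radius_m=500):
    """Names of POIs within radius_m of the current GPS fix -- the
    kind of lookup an AR app runs before drawing overlays."""
    return [name for name, plat, plon in POIS
            if haversine_m(lat, lon, plat, plon) <= radius_m]
```

A real AR app would then anchor each result to the camera view using the compass and accelerometer, but the proximity query itself is this simple.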

Examples of apps using augmented reality

It’s truly amazing what talented developers are able to come up with when it comes to augmented reality and the total app experience. For example, how about using social elements added on top of augmented reality to take advantage of crowd-sourcing:

“…AR is being used by a start-up company in San Francisco called CrowdOptic that can recognize which direction a crowd of people have their phones pointing. They can then invite others using that app to see what all those phones are seeing. For example, at a NASCAR race, fans that can’t see the entire track could point their phones at a distant turn and get photos and videos gathered by others who are closer to the action.” – Augmented reality is just beginning, LinkedIn.com

The BBC reports on several apps that use augmented reality to give artistic creations a whole new look. For example, Colorapp gives anyone the chance to see their drawings come to life: simply color the picture, then use the app to animate it in ways you never imagined. Another app, Tocaboca, lets you put yourself in a salon to play a little with a new look.

Movies are also getting the augmented reality treatment; a new app tied to the Hunger Games trilogy lets the user point at a film poster to unlock “Easter eggs” like new wallpapers, a photo booth with characters from the film, and new tidbits from the movie trailer.

Developers now have the ability to build native apps for Google Glass with the new Glass Development Kit; one of these is called Wordlens:

“Google showed off a few of the first native Glass apps, and one of the coolest among them was Wordlens, a real-time, augmented-reality translation app. Wordlens works much like it does on the iPhone—foreign-language text targeted by the camera is translated on top of the video feed in real time. This is neat on a smartphone, but on a device like Glass it becomes much more powerful. Just by looking at text and saying "OK, Glass, translate this," the text on the Glass video feed is translated and placed above the original text. Wordlens' app uses the accelerometers to keep the virtual text aligned, all while working completely offline.” – Google launches Glass dev kit, ArsTechnica.com

Of course, one of the most natural fits for augmented reality in apps has to be shopping. An app called Adornably uses augmented reality layers to help customers shop for furniture, giving users the ability to see what a piece would look like in their own space, something immeasurably helpful when you’re standing in the furniture showroom. Here’s how the app works:

“To use the app, you start by placing a standard-sized magazine in the center of the room you’re looking to furnish. Fire up the app and it’ll zero in on the magazine and then use those proportions to determine the scale of the room. Once you find a perspective you’re happy with, you can snap a picture at the tap of a button.”


Augmented reality as everyday technology

As apps become more and more sophisticated, we should start seeing augmented reality as an expected technology in our “everyday apps,” i.e., the ones we use on a regular basis. One of the first prototypes of this kind of experience was MIT Media Lab’s SixthSense device. Conceived and designed through a couple of collaborative experiments at the Media Lab, it hangs around your neck and projects computer-generated content onto a wall or any other surface. Users can interact with it: dial a telephone number projected on their hands, move images on a wall with their hands, and email what they see to someone else.

Basically, the SixthSense device is a wearable PC that transforms any surface into a display screen that is completely interactive with the user, letting them pull in apps and content whenever and from wherever they need to, and then dismiss them when they’re finished. This isn’t a super sophisticated device; it’s just a webcam, a battery-powered 3M projector with a mirror, and an Internet-enabled mobile phone. Looking at what is currently being done with this project, and imagining further implications as developers layer other sensors and capabilities on top of it, is mind-boggling.

Augmented reality is taking our apps one step further, making information available to us before we realize we need it. The leader of the SixthSense group at MIT recognizes this need:

"We're trying to make it possible to have access to relevant information in a more seamless way," says Dr. Pattie Maes, who heads the Fluid Interfaces Group at MIT. She says that while today's mobile computing devices can be useful, they are "deaf and blind," meaning that we have to stop what we're doing and tell those devices what information we need or want….We have a vision of a computing system that understands, at least to some extent, where the user is, what the user is doing, and who the user is interacting with," says Dr. Maes. "SixthSense can then proactively make information available to that user based on the situation…. All the work is in the software. The system is constantly trying to figure out what's around you, and what you're trying to do. It has to recognize the images you see, track your gestures, and then relate it all to relevant information at the same time." – BBC.co.uk

What is the future of augmented reality?

App users are becoming increasingly sophisticated in their expectations of apps. Just a few years ago, touch-enabled apps were seen as the pinnacle of what apps could offer; now this capability is expected as the norm in pretty much any app you download. Can we expect the same from augmented reality? Share your thoughts with us in the comments.

Intel, Latest Technology

Custom API Analytics with Expressway and Splunk

Splunk – An Ancillary Source of API Analytics

Data analytics solutions seem as varied as the data they analyze. However, Expressway users have found tremendous success extending its built-in API analytics capabilities with those of Splunk – a recognized 2013 Gartner Magic Quadrant Leader for Security Information and Event Management. Intel distributes a free Splunk application that ingests Expressway’s transactional logs. The application provides in-depth dashboards and metrics of message transactions & system utilization. Recently, one of my customers wanted an alternate way to integrate Splunk with Expressway that:

  1. Goes beyond the transactional context that Expressway Service Gateway’s (ESG) transactional logs provide.
  2. Sends data directly to Splunk from ESG Applications – instead of Splunk ingesting ESG logs.
  3. Does 1 and 2 with negligible overhead.

Coupling Splunk’s ability to ingest “any data from any source” with ESG’s integration capabilities and Intel-optimized performance, this was a snap.

Integration of ESG and Splunk

Splunk offers several options for data input, including files & directories, TCP, UDP, and scripts. ESG’s flexible interfaces easily accommodate a TCP connection to Splunk.
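On the Splunk side, a TCP data input can be declared with a short inputs.conf stanza. The port, sourcetype, and index below are illustrative choices, not values ESG requires:

```
[tcp://9997]
sourcetype = esg_transaction
index = main
```

Splunk then listens on that port and indexes whatever ESG sends it.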

ESG parameterizes all aspects of an incoming request, both content and context. For API requests this includes:

  • HTTP headers
  • HTTP method
  • HTTP URI segments
  • request size
  • response size
  • response code
  • query parameters
  • inbound IP address
  • processing time
  • specific message content
  • transaction time
  • … any other data …

Sending this data directly to Splunk allows it to generate real-time metrics of ESG’s API utilization.
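As a rough sketch of that direct feed (the field names, host, and port are assumptions for illustration, not ESG’s actual schema), an ESG application could render each transaction as a key=value line and push it over TCP:

```python
import socket

def format_event(fields):
    # Render one transaction as a space-separated key=value line,
    # a format Splunk can field-extract automatically at search time.
    return " ".join(f"{key}={value}" for key, value in fields.items())

def send_to_splunk(line, host="splunk.example.com", port=9997):
    # Forward one event line to a Splunk TCP data input
    # (hypothetical host and port).
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall((line + "\n").encode("utf-8"))

event = format_event({
    "method": "GET",
    "uri": "/claims/v1/status",  # illustrative URI
    "response_code": 200,
    "processing_ms": 12,
})
print(event)  # method=GET uri=/claims/v1/status response_code=200 processing_ms=12
```

Because Splunk extracts key=value pairs automatically, each field becomes immediately searchable without any index-time configuration.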

Customized & Enriched Information

Even a small amount of Expressway data allows Splunk to yield instant yet thorough API analytics.

API Analytics Splunk Dashboards

Splunk’s true value to Expressway users (API providers) comes from its ability to easily generate secondary (tertiary, etc.) API analytics. For example, say transactions have an HTTP header whose values represent a unique application identifier. Now statistics (calls per operation, processing time per operation, etc.) can be further delineated by application.
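In Splunk’s search language, that delineation is a one-line stats clause; the field names app_id, operation, and processing_ms here are illustrative:

```
... | stats count AS calls, avg(processing_ms) AS avg_processing_ms BY app_id, operation
```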

Charts: Calls by Operation per Application; Processing Time by Operation per Application

Analytical permutations become a function of the amount of data sent from Expressway. Splunk’s custom application management does the rest!

Summary

Expressway Service Gateway – API security, high-speed policy enforcement, and data format & protocol mediation, with applicability across several industry verticals. Now it integrates seamlessly with Splunk, providing in-depth transactional analytics – especially around API utilization. Be sure to keep an eye out in Splunk Apps for an Expressway API Analytics application – coming soon!

About Joe Welsh

Joe is a Proof of Concept Pre-Sales Engineer with Intel’s Application Security Software & Datacenter Software Divisions. He joined Intel after working as a Healthcare Integration and Software engineer. Joe’s current focus resides in helping provide integration solutions utilizing Intel’s flagship Expressway gateway security product line that includes: Intel Expressway Tokenization Broker, Service Gateway, and API Management products. When not working, Joe enjoys spending time with his family, the outdoors, music, live sports (soccer and hockey especially), and reading non-fiction.
Intel, Latest Technology

The Floating Holiday

My Dad had a lot of respect for the American flag. I think it was because he was a Korean War veteran. While I was growing up, he had a small poster on the wall of the garage which he called “the flag rules”. It spoke to exactly how to display the flag, how to handle the flag, and how to fold the flag, but the main section showed the days when the flag should be displayed. My Dad followed the flag rules.

Flying the flag makes me think about holidays. In the U.S., we all have the same nine paid holidays each calendar year. Some of those nine aren’t on actual holidays. For example, Independence Day was on Thursday, July 4th but we also got Friday the 5th off as a paid holiday. The nine holidays are shifted a bit every year to make up some good 3- and 4-day weekends.

Now, this is pretty cool: in addition to the nine paid holidays, Intel also offers a floating holiday. What’s a floating holiday? It’s a holiday that YOU choose. Some employees have special days that they want to take off of work to celebrate or recognize personal, religious, patriotic or historical people or events. What if Intel didn’t pick those employees’ special day as one of the nine? In the past, employees would have to take a vacation day. Now? Answer = floating holiday.

You just have to get your manager’s agreement that you can take off that particular day, and – boom – you just made your own holiday. It turns out that one of the days when the flag should be displayed is June 14th, Flag Day. My Dad flew the flag every Flag Day and I have upheld that little tradition in my family. A lot of courageous people paid the ultimate cost for our Flag, so choosing a day to honor it seems more than appropriate. In 2014, I’m thinking of taking Flag Day as my floating holiday to honor the Flag and my Dad. And because I work at Intel, I will get paid as I keep calm and flag on.

Intel, Latest Technology

Innovation, Convenience, and Access: A quick look at six apps

There’s no question: apps and the mobile ecosystem in general are completely changing how we interact with the world. Think that statement is a bit bold? Think back ten years ago to what you would need to take on a family camping trip to Yellowstone Park: a bundle of maps, a camera, a separate video camera, CD player, flashlight, paper to write on, books to read, DVDs to watch…now, you only need a tablet or smartphone to accomplish everything that all these devices did separately, and more.

Apps and education

The education genre is becoming more dynamic than ever before with the advent of apps that are innovating how we impart knowledge. The Apple App Store’s pick for the top free iPhone App of the Year, Duolingo, reflects this shift in thinking:

“Languages used to be expensive to learn. Sure, there was Rosetta Stone, but it's an expensive program (it starts at $274), and the folks who really need to learn to speak other languages "don't have the money," says von Ahn. "They need it to get a better job."

So they put their heads together and came up with Duolingo, which was released at the end of 2012. Twelve months later, Duolingo has seen 10 million downloads. And now Apple has deemed it the free iPhone App of the Year. (The app is also available for Android.)” – Duolingo Apple iPhone App of the Year, USAToday.com

From free language learning, to apps that let you tap into the vast reserves of Wikipedia, to learning math, engineering, and computer science via Khan Academy’s mobile app, the world is your oyster with education apps.

Apps and interaction

We’re all familiar at this point with the plethora of social networking apps on the market, with new variations being uploaded to various app stores hourly. There seems to be no end to the way we can communicate with the people around us both near and far. However, there is one app that takes that need to communicate and makes it more intimate:

“….Millions of people use Whisper and it is approaching 3 billion monthly page views. On average, people spend more than 20 minutes per day with Whisper, checking its content eight to ten times per day. Whisper has raised $25 million from early Snapchat investor Lightspeed and others.

The people who are spilling their guts on Whisper fall between ages 17 and 28. Heyward says less than 4 percent of his users are under the age of 18. The vast majority of its users—70 percent—are women. 

The reason Whisper gets so many people to share things they'd never say out loud is because everything is posted anonymously. In the past, anonymous social networks have been nasty places. Just look at the comments on YouTube…” Whisper app gets 3 billion monthly page views, Slate.com

There’s no doubt that social networking apps – and communication in general via apps – will continue to evolve and explode at a phenomenal rate; humans are social creatures, and anything that makes the process of connecting to others in a meaningful way easier is bound to get attention. Whisper stands out because it enables more intimate communication without losing anonymity, a feature that more social apps could use.

Apps and health

One of the most obvious topic areas where more apps are needed for a greater flow of information is the medical field. Imagine if you could access all your medical records with just the touch of a button, and not only that, but pay bills and co-pays, see what your plan looks like, contact member services, order prescriptions, and so on. One app that is tackling the medical records issue is called iBlueButton:

“Relaxing after dinner, Dr. Mostashari, then President Obama's national coordinator for health-information technology, signed them up for a new app called iBlueButton that lets people access their Medicare records via smartphone.

The next morning, his father complained of severe eye pain. "I thought, 'It's the Friday after Thanksgiving. We'll be spending the whole day in the ER,'" Dr. Mostashari recalls. Then he remembered iBlueButton, which showed that a cataract surgeon had diagnosed a dry-eye condition, enabling Dr. Mostashari to get his dad the appropriate medication before noon.” – Next in Tech: App Helps Patients Track Care, Wall Street Journal

As our information moves to the digital format, apps that help us access medical information will become the norm; we’ll be able to access our data and make more use of it than just at a six-month checkup.

What about fitness? This time of year many of us are thinking about New Year’s resolutions, and one of the perennial favorites is getting healthy. There are a wide variety of apps that can help users in their journey to fitness, including one called Runnit that actually gives you goodies for exercising:

“Runnit for iPhone wants to reward you with discounts, free products and “exclusive content” every time you pound the pavement. The app is available to download in the UK only for now – but a US launch is on the cards for early 2014.” – Runnit iPhone app rewards running, TheNextWeb

Apps and shopping

Shopping is definitely something that has changed for the better with mobile apps. For example, this year instead of fighting crowds, I did the vast majority of my holiday shopping on my phone sitting in front of the television, with free delivery to my house within three days. Convenient? Yes. Potentially addictive? No comment. There are a lot of apps out there that make this entire process easier, but there are a couple that are taking it one step further. One of those is GroceryTrip:

“GroceryTrip makes building grocery lists from recipes and other webpages and documents you've clipped to Evernote easy. The app takes your tagged items, consolidates duplicates, sorts items by aisle or section of the store, and more.” – GroceryTrip is an Evernote-powered grocery list app, Lifehacker.com

There’s also Drync, an app that makes it easier to track down that delicious wine you can never remember the name of:

“The app called Drync is a cross between Cellar Tracker and a wine shop; like Shazam, it allows the user to buy a wine immediately upon liking it and identifying it, whether it be at a dinner party or a wine bar….From within the app, take a picture of a bottle label (or use the less fun, more tedious predictive type feature) to find the wine. Using image recognition technology, the app identifies the label from its 1.7 million bottle database and then cross-references it with the current inventory of Drync's retail partners. If that wine is in stock, you're in business. If not, the app will store your selection and automatically notify you when it's available.” – Drync, The Village Voice

Apps are changing the way we interact with the world

It’s fascinating to see the wide variety of innovations that developers are coming up with for apps in every imaginable genre. As we continue to grow the app ecosystem, apps will continue to evolve the way that people are interacting with each other, their environment, and the global economy.

Intel, Latest Technology

APIC Virtualization Performance Testing and Iozone*

Introduction

Virtual machine monitors (VMMs) emulate most guest access to interrupts and the advanced programmable interrupt controller (APIC) in a virtual environment.  They also virtualize all guest interrupts. These activities require the exit and reentry of the virtual machines (VMs); they are time consuming and a major source of overhead.  In order to minimize that effect, Intel's new Xeon® E5 v2 processors perform those activities in hardware.  This new feature is called APIC virtualization (APICv).  More information can be found at [6] and in chapter 29 of [7].

                            Figure 1 - VM-VMM interaction with and without APICv

Figure 1 shows that in systems without APICv, all virtualized activities relating to interrupts and the APIC, to and from the guest OS, have to go through the VMM.  In systems with APICv, they are executed in the hardware, not in the VMM.  This way all activities can stay inside the VM, eliminating the need for a VM exit, which reduces overhead and increases throughput.

In this blog we will test this new feature to see whether it improves throughput and how it affects CPU utilization.

Terminology

Advanced Programmable Interrupt Controller (APIC) is a programmable interrupt controller (PIC) that can handle interrupts from multiple processors.  More information about PIC can be found here [2].

Fread is reading a file using the function fread().  More information about this can be found here[8] in the download documentation link.

Fwrite is writing a file using the function fwrite().  More information about this can be found here[8] in the download documentation link.

Guest Operating System (OS) is an OS that runs in a VM.  More information about the guest OS can be found here[5].

Pread is reading from a file at a given offset.

Pwrite is writing to a file at a given offset.

Random Read is reading from a file with access being made to random locations within the file.  More information about it can be found here [8] in the download documentation link.

Random Write is writing to a file with access being made to random locations within the file.  More information about it can be found here [8] in the download documentation link.

Virtualization is a way to run multiple independent virtual operating systems on a single physical computer.  More information about it can be found here[1].

Virtual Machine (VM) is an emulation of a computer system running on a hypervisor – the software, firmware, or hardware that creates and runs VMs.  More information can be found here [3] and here [4].

The Test

Iozone v3.42 was chosen to test the file I/O performance of a system. The beta version of the 64-bit Enterprise edition of Red Hat* 7 was used with KVM* installed on a system with two pre-production Intel(R) Xeon(R) E5-2697 v2 processors and 64GB of RAM.  We created twelve VMs, each running the same OS: the beta version of the 64-bit Enterprise edition of Red Hat 7.

On each VM, Iozone performed the test on a 100MB file with a record size of 64KB.

The following steps were used to collect file I/O activities and CPU utilization:

1) Enable the APICv feature in KVM

2) Turn on one VM

3) Use ssh to execute Iozone on all running VMs

4) At the main console, run the sar (system activity report) command to collect the percentage of CPU utilization

5) Increase the number of running VMs by one and repeat from step 3

6) Repeat until all created VMs are running

7) Disable the APICv feature in KVM

8) Repeat steps 3 through 6

Note that the results can vary from system to system depending on many factors like the record size, file size, type of data disk and so on.
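The per-VM fan-out in step 3 can be sketched as follows. This is an illustrative sketch only: the VM hostnames are hypothetical, and the Iozone flags (-s file size, -r record size, -i test selection) should be checked against the Iozone documentation [8]:

```python
import subprocess

def iozone_cmd(size_mb=100, record_kb=64):
    # Build the Iozone command line used on each VM:
    # -s file size, -r record size, -i selects the tests
    # (0 = write/rewrite, 1 = read/reread, 2 = random read/write).
    return ["iozone", "-s", f"{size_mb}m", "-r", f"{record_kb}k",
            "-i", "0", "-i", "1", "-i", "2"]

def run_on_vms(vm_count):
    # Launch Iozone concurrently on vm1..vmN over ssh, then wait for all
    # runs to finish (hostnames are hypothetical).
    procs = [subprocess.Popen(["ssh", f"vm{i}"] + iozone_cmd())
             for i in range(1, vm_count + 1)]
    for proc in procs:
        proc.wait()

print(iozone_cmd())
```

run_on_vms(n) corresponds to one iteration of step 5, with sar sampling at the host console while the runs are in flight.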

Results

Figure 2 - Percentage CPU utilization with and without APICv


Figure 2 shows CPU utilization increases ranging from 0.26% to 4.98% when comparing systems with APICv to systems without it. This makes sense: in systems with APICv, request activities are served within the VMs without exiting to the VMM, resulting in less overhead.  That translates into more requests being ready to execute within the same time frame than in systems without APICv, and therefore more CPU activity.  To figure out which component of CPU utilization causes the increase, we analyzed its components further and found that the iowait values in systems with APICv are lower than those in systems without APICv, as can be seen in Figure 2a.  Iowait is the portion of time the CPU is idle while the system has an outstanding disk I/O request.  The lower iowait confirms that systems equipped with APICv spend less time waiting on I/O and more time executing requests.

Figure 2a - Percentage of iowait with and without APICv

Figure 3 – Throughput of random read with and without APICv
With APICv, the random-read throughput increases range from 0.29% to 2.73%.

Figure 4 – Throughput of random write with and without APICv
With APICv, the random-write throughput increases range from 0% to 8.51%.

Figure 5 – Throughput of pread with and without APICv
With APICv, the pread throughput increases range from 0.52% to 9.3%.

Figure 6 – Throughput of pwrite with and without APICv
With APICv, the pwrite throughput increases range from 0.8% to 3.44%.

Figure 7 – Throughput of fread with and without APICv
With APICv, the fread throughput increases range from 0.08% to 1.39%.

Figure 8 – Throughput of fwrite with and without APICv
With APICv, the fwrite throughput increases range from 0.73% to 4.18%.


Conclusion
APICv helps improve throughput due to less overhead, since activities relating to the APIC are done in the hardware, not in the VMM.  APICv also increases CPU utilization, resulting in less idle time for servers.

References
[1] http://en.wikipedia.org/wiki/Virtualization
[2] http://en.wikipedia.org/wiki/Programmable_Interrupt_Controller
[3] http://en.wikipedia.org/wiki/Virtual_machine
[4] http://en.wikipedia.org/wiki/Hypervisor
[5] http://searchservervirtualization.techtarget.com/definition/guest-OS
[6] http://www.linuxplumbersconf.org/2012/wp-content/uploads/2012/09/2012-lpc-virt-intel-vt-feat-nakajima.pdf
[7] http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-system-programming-manual-325384.pdf
[8] http://www.iozone.org/

Intel, Latest Technology

Be Prepared for the Future of Networking

The future is never easy to predict, especially in terms of technology innovation. While some theorized the possibility of an Internet connecting computers, no one could fully grasp the impact the “network of networks” would have on our communications, economy, culture, and way of life. But if there’s one thing that’s clear about the future of networking, it’s that the industry is moving toward cloud computing and a greater aggregation and scale of resources in the network. I believe that in the not-too-distant future, everything that computes will connect, and everything that is connected will be computing. As a consequence, the always-on, always-connected nature of devices will be a facet of our digital life. For more detailed information on this topic, tune into the Network Transformation podcast series.

Solutions on the horizon
In order to support this continual increase in devices and connectivity, our networking infrastructure will need to be much more flexible in terms of provisioning different resources than it is today. Compute, networking, and storage assets or resources will need to be made available to the consumer of services irrespective of geographic location and time of day. For the connected citizen, the network will become a personal cloud. Your ability to watch high-definition video on the right device at the right time will be enabled by a network that responds to that need. This will be made possible by a network where services are managed and provisioned seamlessly, and resources are dynamically reallocated based on demand. Where it’s not required, resources will be scaled back so that bandwidth is best utilized and cost optimized. This flexibility and optimization will mean that networks conserve limited resources (namely bandwidth and electricity), making them more sustainable from both a business and an environmental perspective.

On a network level, designing for efficiency and sustainability has a number of built-in benefits. On the hardware side, you can increment capacity by adding more functions on physical hardware; for example, adding blades to an existing chassis or upgrading those blades dependent on their need to grow capacity, performance, or capability. That’s an upgrade path which doesn’t require changing the entire chassis. On the software side, a network that embraces more software-based functions is a much more reusable, sustainable, and scalable model. Rather than use a TDM to IP gateway, which is largely a fixed-function hardware device today, network designers can look to use standard computing platforms, where the switching function or gateway function is provided through software. As a result, the same hardware platform can be retargeted to have a completely new function based on a new software load. To handle the proliferation of high-bandwidth services, like HD video, companies can design systems to avoid the constant backhauling of video from central locations.

What you need to know
To best prepare for the future today, service providers can start to deploy new infrastructure builds on modular, flexible platforms rather than sustaining current systems unable to evolve into this new model. The primary opportunity is to embrace software-defined infrastructure, because it allows service providers to keep pace with both standards definition and the evolution of new technology development. Transformation of services is truly achieved when the network is entirely flexible and resources are configurable. Secondly, service providers should really embrace and support the growth of the supplier ecosystem. The broader the base of suppliers, the greater the opportunity for service providers to achieve true economies of scale.

The movement towards more cloud service provision is certainly permeating all facets of what we’ve come to know as connectivity and communications. We’re beginning to see an industry driven by service; driven by the user experience. Telecommunications has traditionally been a slow-moving industry, so all this is quite new. We have to be realistic about the timeline for change, particularly in telecoms, which has traditionally taken decades or more to embrace, introduce, and deploy new technologies on a mass scale. But there are innovative companies today who are embracing new technologies, starting to virtualize functions, and deploying them on standard platforms. That work is already underway and it will be exciting to see the result.

Intel, Latest Technology

Benefits of Solid-State storage technologies in the Cloud

Summary

Solid-state storage technologies applied in solid-state drives (SSD) have rapidly evolved over the last few years, enabling higher capacity devices and even greater reliability. SSDs now are used for caching and other purposes in the data center and in larger system applications, including technical computing where massive data sets (big data volume-variety-velocity) are common.  This blog gives a brief description of the difference between SSD and hard disk drive (HDD), shows the relative latency improvement over the last few years for Intel® SSD, shows the potential uses of SSDs in the cloud environment, and provides some general Linux guidelines to fully utilize Intel SSDs.

What is the difference between SSD and HDD?

The main difference between an SSD and an HDD is shown in Figure 1.  Imagine an HDD as a record player, where data are stored on tracks and a mechanical arm must move to the right track to retrieve them.  In an SSD, data are stored on microchips and retrieved electronically.  Access on an SSD is much faster, and because it has no moving parts, the latency for data reads and writes is lower.  In the SSD there is a housekeeping process, in which a built-in algorithm checks and refreshes the flash (NAND) cells to keep performance predictable (the whole process is referred to as Quality of Service – QoS).  Reads require less housekeeping than writes.

Figure 1 - Benefits of using SSD

Relative Improvement on Intel SSDs

Over the last 30 years, the most significant change in relative latency and bandwidth in a storage device comes from NAND (a type of flash memory), which is currently used in Intel SSDs.  In Figure 2, the red line indicates the transition from a mechanical device to an SSD. In this transition, Intel SSDs improved latency by two orders of magnitude and bandwidth by one order of magnitude.  Complete performance details can be found in the Intel® SSD DC S3700 Series specification:

  • Read and Write IOPS (Full LBA Range, Iometer* Queue Depth 32)
    • Random 4 KB Reads: Up to 75,000 IOPS
    • Random 4 KB Writes: Up to 36,000 IOPS
    • Random 8 KB Reads: Up to 47,500 IOPS
    • Random 8 KB Writes: Up to 20,000 IOPS
  • Bandwidth Performance
    • Sustained Sequential Read: Up to 500 MB/s
    • Sustained Sequential Write: Up to 460 MB/s
  • Latency (average sequential)
    • Read: 50 µs (TYP)
    • Write: 65 µs (TYP)

Figure 2- SSD Transformation

In the Intel Data Center Family of SSDs, performance increases as the workload moves from writes to reads, due to the NAND process and the “housekeeping” activities associated with the write process (see Figure 4).

Figure 3 Performance vs. Density


Figure 4 Read/Write Mix

Another important thing to know is that Intel SSD performance increases as the queue depth increases.  Queue depth is the number of pending input/output (I/O) requests for a given logical drive.  Intel SSDs have a maximum queue depth of 32 (specified by the SATA protocol), which means the drive can process more requests in parallel (see Figure 5).  With a mechanical drive, large queue depths can indicate device overload, whereas in an SSD deep queuing can be beneficial: SSDs have very low latency and respond almost instantly to I/O requests.  Intel SSD drives can reach up to 75K IOPS for reads and 36K IOPS for writes for random 4K block-aligned workloads.

Figure 5- Queue Depth

With the growth of both features and capacity, the potential uses of SSDs in cloud and data center applications are expanding.

Potential Usage of SSD in the Cloud and Data Center Applications

Due to the higher cost per GB of SSDs and their high I/O operations per second (IOPS), SSDs are best used in workloads that benefit from the high random IOPS capacity the drives exhibit.  One such workload is data caching.  In Figure 6, the front-end web server and back-end storage host the data caching functions, where SSDs can be applied to reduce the traffic and requests before they go out to the network.

Figure 6 - SSDs in the Cloud

For applications such as databases, content delivery, and email servers, SSDs offer faster response times than traditional disks and can be used as “proximity storage” to reduce the requests over the network or storage network (see Figure 7).  Proximity storage is local storage that provides a data caching or storage tiering function for the local servers.

Figure 7 - SSD Best Fit for Data Center Applications

How to take advantage of Intel SSDs

For most general workloads, using a 4K block size and setting the Linux IO scheduler to NOOP will give optimum performance on all Intel SSDs.  Other workloads, not covered in this blog, may require further testing and analysis to determine the most efficient OS parameters.  Here are some general guidelines to take advantage of Intel SSDs after installing the operating system:

  1. Change the value of the IO scheduler to NOOP
  2. Align partitions to a 4 KB (4096-byte) boundary

An example of how to set the IO scheduler in Linux (this setting will NOT survive a reboot):

# cat /sys/block/sdb/queue/scheduler
noop anticipatory deadline [cfq]
# echo noop > /sys/block/sdb/queue/scheduler

noop is the optimum value for 4k random workloads.
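Because the sysfs setting is lost on reboot, one common way to make it persistent on Linux kernels of that era is the elevator= kernel boot parameter, appended to the kernel line in the GRUB configuration. The file path, kernel version, and root device below are illustrative:

```
# /boot/grub/grub.conf (kernel line only)
kernel /vmlinuz-2.6.32-431.el6.x86_64 ro root=/dev/sda2 elevator=noop
```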

An example of how to set up the partition in the Red Hat* edition of Linux: partition using fdisk with appropriate values for block sizing and options. Use option -b when partitioning, option "c" to turn off DOS-compatible mode, and option "u" to change the display units to sectors:

# fdisk -b 4096 /dev/sdX
    c    (turn off DOS-compatible mode)
    u    (change display units to sectors)

 Conclusion

With the explosion of big data, data center applications and cloud infrastructures will need to process an ever-increasing volume of data and requests from their users.  This brief overview of Intel SSD features and the potential uses of SSDs in the data center can help developers plan for this type of storage capability as it becomes more common.  Finally, Intel SSDs can help cloud infrastructure and data center applications meet those growing user demands and achieve better response times simultaneously.

Intel, Latest Technology

API Management – Anyway you want it!

- By Andy Thurai (@AndyThurai) and Blake Dournaee (@Dournaee).

This article originally appeared on Gigaom.

Enterprises are building an API First strategy to keep up with their customers’ needs and provide resources and services that go beyond the confines of the enterprise. With this shift to using APIs as an extension of enterprise IT, the key challenge remains choosing the right deployment model.

Even with bullet-proof technology from a leading provider, your results could be disastrous if you start off with the wrong deployment model. Consider developer scale, innovation, incurred costs, complexity of API platform management, and so on. On the other hand, forcing internal developers to hop out to the cloud to get API metadata when your internal API program is just starting is an exercise in inefficiency and inconsistency.

Components of APIs

But before we get to deployment models, you need to understand the components of API management, your target audience and your overall corporate IT strategy. These certainly will influence your decisions.

Not all enterprises embark on an API program for the same reasons – enterprise mobility programs, rationalizing existing systems as APIs, or finding new revenue models, to name a few.  All of these factors influence your decisions.

API management has two major components: the API traffic and the API metadata. The API traffic is the actual data flow and the metadata contains the information needed to certify, protect and understand that data flow. The metadata describes the details about the collection of APIs. It consists of information such as interface details, constructs, security, documentation, code samples, error behavior, design patterns, compliance requirements, and the contract (usage limits, terms of service). This is the rough equivalent of the registry and repository from the days of service-oriented architecture, but it contains a lot more. It differs in a key way; it’s usable and human readable. Some vendors call this the API portal or API catalog.

Next you have developer segmentation, which falls into three categories: internal, partner, and public. The last category describes a zero-trust model where anyone could potentially be a developer, whereas the other two categories carry varying degrees of trust. In general, internal developers are more trusted than partner or public developers, but this is not a hard-and-fast rule.

Armed with this knowledge, let’s explore popular API Management deployment models, in no particular order.

Everything Local

In this model, the software or gateway that provides API metadata and traffic management is deployed entirely on-premise, either in your DMZ or inside your firewall. This "everything local" model gives the enterprise the most control with the least amount of risk, simply because you own and manage the entire API management platform. The downside to this model can be cost. Owning it outright might cost less in the long run, but the upfront cost of ownership can be higher than in other models because your enterprise needs the requisite servers, software, maintenance, and operational expertise. However, if the API platform drives enough revenue, innovation, and cost reduction, the higher total cost of ownership (TCO) can be justified by a quicker return on investment (ROI). This model serves internal developers best and suits large enterprises that want to start with ownership and complete control of their API management infrastructure, which can eventually be pushed out to a SaaS model.

Virtual Private Cloud

In this model, either software or a virtual gateway is deployed in a virtual enterprise network such as an isolated Amazon private cloud or virtual private cloud (VPC). Depending on the configuration, traffic can either come to the DMZ or go directly to the private cloud. Traffic that comes to the enterprise DMZ can be forwarded to the VPC, and direct communication with the VPC can be enforced based on enterprise governance, risk, and security measures. A VPC deployment may be ideal for trusted internal developers and partner developers, and it allows the enterprise to experiment with elasticity. The VPC model with multi-homed infrastructure also allows the API metadata to be made accessible from the Internet as a soft launch rather than a big bang. As partners grow, the infrastructure can scale in the private cloud without the need to advertise the API metadata to every garage developer out there. This option gives the enterprise control similar to the local datacenter deployment, with slightly elevated risk but more elasticity.

Hybrid SaaS

In this model, the API traffic software/gateway is installed on-premise, but developer onboarding and the public-facing API catalog (or portal) are deployed in a public SaaS environment. Though the environments are physically separated, they are connected through secure back channels that feed information on a near-real-time basis. Communication includes flows from the API management catalog to the API traffic enforcement point, such as API keys, quota policies, and OAuth enforcement. The API traffic manager pushes traffic analytics, statistics, and other pertinent API usage information back to the SaaS public cloud.
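The back-channel synchronization described above can be sketched as follows. This is a minimal, hypothetical model of an on-premise enforcement point applying key and quota policies pushed down from the SaaS catalog; all names and data structures are illustrative, not any vendor's API:

```python
# Minimal sketch of an on-premise enforcement point applying policies that a
# SaaS catalog pushes down over the secure back channel. Names and structures
# are illustrative only.
from collections import defaultdict

# State synced from the SaaS portal: valid API keys and their quotas.
synced_keys = {"key-abc": {"quota_per_day": 3}}
usage = defaultdict(int)  # per-key counters, reported back to SaaS as analytics

def enforce(api_key):
    """Return (allowed, reason) for one incoming API call."""
    policy = synced_keys.get(api_key)
    if policy is None:
        return False, "unknown key"        # onboarding happens in the SaaS portal
    if usage[api_key] >= policy["quota_per_day"]:
        return False, "quota exceeded"     # quota policy set in the catalog
    usage[api_key] += 1                    # counted locally, pushed back to SaaS
    return True, "ok"

results = [enforce("key-abc") for _ in range(4)] + [enforce("bad-key")]
print(results)
```

Traffic never leaves the enterprise; only the policy and analytics metadata cross the back channel.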

This model provides good developer reach and scale, as developers interact in a shared cloud instance while traffic keeps flowing through the enterprise components. This model also gives you a split cost model: the API metadata is charged as a service (without a heavy initial investment) while the data-flow component is a perpetual license, giving the enterprise a mix of both benefits. The API traffic can still come to the enterprise directly without going to the cloud first, letting the enterprise use its existing components and reducing some capital expenditure (capex). This configuration maximizes enterprise control and security and combines that with maximal developer outreach and scale under a utility cost model.

This may seem like the best of both worlds, so why even consider other models? In practice this model may be extended and combined with the others, for example by adding a developer portal on-premise to better serve internal developers with improved latency and more IT-architect control. It is not about exclusive choices, but about understanding the benefits of each of the interconnections.

Pure SaaS

This is the full on-demand model. In this configuration, both developers and the API traffic are managed in a multi-tenant SaaS cloud. In the pure SaaS model, API traffic hits the cloud first and is managed against enterprise policies for quotas, throttling, and authentication/authorization. Analytics are processed in the cloud, and the API call is securely routed back down to the enterprise. The SaaS portal is skinned to conform to the customer's branding, can integrate web content of the customer's choosing, and is served under a URL of the customer's choosing, so that as far as developers are aware, the portal is owned and operated by the customer.

Because enterprises use the cloud's elastic model here, both for scaling and for cost, opex pricing can be many times lower than the heavy initial investment the previous models might require. In one sense this is comparing apples and oranges: in the opex model you trade the higher up-front cost of running and maintaining your own servers for a lower monthly fee. But as we mentioned before, there may be reasons for both: a large enterprise may run a SaaS API program for its marketing department and an internal API management program for an IT department supporting a new mobility strategy. The SaaS option maximizes developer scale and has the lowest maintenance costs, and the enterprise needs fewer resources to run and maintain the deployment. This option is best suited for instant updates to the API management platform with minimal downtime, and for high performance through CDN caching and managed failover and resiliency.
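To make the apples-and-oranges comparison concrete, here is a toy break-even calculation. Every dollar figure is invented purely to illustrate the trade-off between a large capex investment with low ongoing costs and a no-upfront subscription with a higher monthly fee:

```python
# Toy capex-vs-opex break-even calculation. All dollar figures are invented
# to illustrate the trade-off; they are not real pricing.
CAPEX_UPFRONT = 120_000  # servers, licenses, setup for an on-premise deployment
CAPEX_MONTHLY = 2_000    # ongoing maintenance and operations
SAAS_MONTHLY = 7_000     # pure SaaS subscription, no upfront cost

def cumulative_cost(upfront, monthly, months):
    return upfront + monthly * months

def break_even_month():
    """First month at which on-premise total cost drops below the SaaS total."""
    month = 1
    while cumulative_cost(CAPEX_UPFRONT, CAPEX_MONTHLY, month) >= SAAS_MONTHLY * month:
        month += 1
    return month

print(break_even_month())
```

With these made-up numbers the on-premise model only pays off after roughly two years, which is why the right answer depends on how long-lived and how large the API program is expected to be.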

It is never one-size-fits-all when it comes to API management; each situation is different based on specific needs. Examine the different deployment options carefully and see what will work best for you, keeping in mind that these deployment models are NOT mutually exclusive: you can combine them.

When we built our API 2.0 platform by combining Intel and Mashery solutions, we took all of the above into consideration. Not only do we not limit you to a specific deployment model, we will also help you transition between deployment models with ease.

We recently announced the combined API 2.0 platform, which brings together our strengths. Check us out at cloudsecurity.intel.com.

|                           | Everything Local            | Virtual Private Cloud       | Hybrid SaaS               | Pure SaaS                 | Custom Built    |
|---------------------------|-----------------------------|-----------------------------|---------------------------|---------------------------|-----------------|
| Initial cost              | $$$                         | $$                          | $$                        | $                         | $$$             |
| Ongoing costs             | $                           | $$                          | $$                        | $$$                       | $$$             |
| Level of control          | High                        | High                        | Medium                    | Low                       | High            |
| Risk & compliance posture | High                        | Medium                      | High                      | Lower                     | High            |
| Flexibility               | High                        | High                        | Medium                    | Medium                    | Medium          |
| Scalability               | Medium                      | High                        | High                      | High                      | Low             |
| Ideal for                 | Internal/partner developers | Internal/partner developers | Public/partner developers | Public/partner developers | Mostly internal |
| Cloudification            | Not offered                 | Built-in                    | Partial                   | Built-in                  | Maybe           |