
Greg KH Readies for Collaboration Summit, Talks Raspberry Pi

Posted on Wednesday, 21 March, 2012

Linux kernel maintainer and Linux Foundation Fellow Greg Kroah-Hartman will be moderating the highly anticipated Linux kernel panel at the Collaboration Summit in a couple of short weeks. He was generous enough to take a few moments recently to answer some questions about what we might hear from the Linux kernel panel, as well as some details on his recent work and projects. Oh, and we couldn’t resist asking him about the new Raspberry Pi.

You will be moderating the Linux Kernel panel at the upcoming Linux Foundation Collaboration Summit. These panels are big attractions for attendees. What do you anticipate will be on the kernel panel’s mind during that first week in April?

Kroah-Hartman: Odds are we will all be relaxing after the big merge window for the 3.4-rc1 kernel. Also, the Filesystem and Memory management meetings will have just happened, so lots of good ideas will have come out of that.

This panel moderation role comes after two Q&A-style keynote sessions with Linus last year to celebrate 20 years of Linux. How does moderating a panel of developers differ from interviewing Linus on stage?

Kroah-Hartman: I will need to bring more than just one bottle of whisky 🙂

Seriously, it’s much the same, but instead of just one person answering questions, there are three different viewpoints being offered, which can lead the conversation to places you never expect. An example of this would be the kernel panel last year at LinuxCon Japan, where the developers on stage got into a big technical argument with the kernel developers in the audience, much to the amusement of the rest of the audience. If done well, it can show the range of ideas the kernel developer community has, and how, while we don’t always agree with each other, we work together to create something that works well for everyone.

You recently released Linux kernel 2.6.32.58 but cautioned that you would no longer be maintaining version 2.6.32 and recommended folks switch to Linux 3.0. Is there anything else you’d like to say about people moving to Linux 3.0?

Kroah-Hartman: For a longer discussion on the history of the 2.6.32 kernel, please see the article I posted recently. Almost no end user builds their own kernel or needs to know the differences here; their distro handles this for them automatically. But technical users who build their own kernels should have no problem at all moving to the 3.0 kernel release. If they do run into problems, they should contact the kernel developers on the linux-kernel mailing list and we will be glad to work through it with them.

Can you give us some updates on the Device Driver Project and/or LTSI?

Kroah-Hartman: There’s nothing new going on with the Device Driver project other than that we are continuing to create drivers for companies that ask for them. I know of at least two new drivers going into the 3.4 kernel release that came from this process, and if any company has a need for a Linux driver, they should contact us to make this happen.

LTSI is continuing forward as well. Our kernel tree is public and is starting to receive submissions in areas that users are asking for. I’ve been working with a number of different companies and groups, after meeting with them at ELC 2012, to refine how LTSI can best work for their users. There will be a report at LinuxCon Japan 2012 in June about what has happened with LTSI since the last public report at ELC.

Have you seen the Raspberry Pi? Sold out in a day. Any chance you’ve gotten your hands on one? If so, what’s your reaction?

Kroah-Hartman: I have not seen one in person, but will be trying to get one (I signed up for one as soon as it went on sale, but was too late). It looks like a great project, much like the BeagleBone and PandaBoard, both of which I have here and use for kernel testing. Hopefully the Raspberry Pi developers can get their kernel patches into the mainline kernel.org release soon, so that it is easier for users to take advantage of their hardware.


NYSE Opens Up About Giving Up Control

Posted on Thursday, 15 March, 2012

Things are really heating up in anticipation of the Sixth Annual Linux Foundation Collaboration Summit taking place April 3-5, 2012. Earlier this week, we talked to Gerrit Huizenga about Linux and cloud computing, and Amanda McPherson shared a peek at the behind-the-scenes work that will take place at The Linux Foundation’s Member Legal Summit on April 2.

We also had the opportunity to talk to NYSE Technologies’ Head of Global Alliances Feargal O’Sullivan. He will be a keynote presenter at the Collaboration Summit and will be talking about “Open Middleware Standards for the Capital Markets and Beyond.”

Can you give us a bit of a teaser on your keynote presentation and tell us how NYSE Technologies identified an opportunity to open source its messaging API and help create the OpenMAMA project?

O’Sullivan: We considered open sourcing our Middleware Agnostic Messaging API for a number of years before finally making it happen late last year. One of the major reasons to do so was to allow our community of users to help develop the additional middleware ‘bridges’ we wanted to support, faster than we could on our own. Of course, we were concerned about losing control of the process and, quite frankly, about opening our revenue-generating Market Data Platform to increased competition.

The change came around January 2011, when we first presented the idea to our Technical Advisory Group. We proposed it as part of our overall strategy of building a community around an open infrastructure platform with common standards for capital markets participants. The idea received unanimous support and a level of enthusiasm that took even us by surprise. What it told us is that the industry suffered from ‘vendor lock-in’ due to proprietary APIs, which stifle both competition and innovation and increase total cost of ownership.

OpenMAMA returns choice to the user, forcing vendors to compete on features and value, which is better for everyone.

What is your biggest lesson learned that you can share with others who might be considering open sourcing technology?

O’Sullivan: Our biggest lesson learned was not to try to go it alone! When we first engaged The Linux Foundation, we had little experience in open sourcing software. We quickly learned that for OpenMAMA to be successful it needed the neutrality and credibility of being a truly open source project. That isn’t as simple as it sounds; had we chosen the wrong license, or hosted OpenMAMA on a server in one of our data centers, it would have seriously undermined the project. Without the benefit of The Linux Foundation’s experience, we wouldn’t have known any better until it was too late.

What do you consider the advantages of open sourcing this technology?

O’Sullivan: OpenMAMA’s true value lies in its agnostic architecture, which allows developers to code to a single API while enabling administrators to switch between supported middleware platforms to meet the requirements of the environment where the application is deployed. However, before being open sourced, MAMA only supported middleware platforms that made commercial sense for NYSE Technologies to develop. This meant leaving out other valuable middleware platforms because we didn’t have the time or resources to support each one. Open sourcing unlocks the full potential of the API by giving control to the end users. Ultimately, OpenMAMA will make NYSE Technologies’ clients happier and our products more functional.
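For readers curious what ‘coding to a single API’ looks like in practice, here is a minimal sketch in C of how an application might select a middleware bridge at startup through the OpenMAMA API. It is illustrative only: the bridge name “avis” and transport name “pub” are placeholder assumptions, and the exact calls and signatures should be verified against the OpenMAMA headers and documentation for the release in use.

    #include <mama/mama.h>
    #include <stdio.h>

    int main(void)
    {
        mamaBridge    bridge    = NULL;
        mamaTransport transport = NULL;

        /* Load a middleware bridge by name; swapping middleware becomes a
           configuration change rather than a code change ("avis" is a placeholder). */
        if (mama_loadBridge(&bridge, "avis") != MAMA_STATUS_OK)
        {
            fprintf(stderr, "could not load middleware bridge\n");
            return 1;
        }

        mama_open();                                    /* initialise the MAMA layer     */
        mamaTransport_allocate(&transport);             /* allocate a transport handle   */
        mamaTransport_create(transport, "pub", bridge); /* bind it to the loaded bridge;
                                                           "pub" must match the properties
                                                           file used at deployment        */

        /* Subscriptions and publishers would be created here against the same
           generic API, regardless of which bridge was loaded above. */

        mamaTransport_destroy(transport);
        mama_close();
        return 0;
    }

The point of the sketch is the design choice O’Sullivan describes: the application code never references a specific middleware, so administrators can move it between supported platforms by changing only the bridge and transport configuration.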

How is the OpenMAMA project doing? Can you give us some updates?

O’Sullivan: So far, the OpenMAMA project has demonstrated a level of success that even we are finding hard to believe. Our approach was to open the C portion of the API at the launch in October 2011 and then contribute the remaining functionality in April 2012. In parallel, we formed the OpenMAMA Steering Committee, made up of users, vendors and direct competitors, to govern the project. This gives the committee time to form a cohesive group and set the direction of the project, while in parallel giving the technical working groups time to evaluate the code and decide what their priorities are for the roadmap. On April 30, when we contribute the final pieces of the API and everyone gathers for The Linux Foundation’s Enterprise End User Summit (which we are hosting at the New York Stock Exchange this year), the community will be fully prepared to take this project forward.

We’re definitely looking forward to visiting your space in April! Can you tell us more about your decision to host this year’s Enterprise End User Summit and why the event is a priority for your organization?

O’Sullivan: We at NYSE Technologies have always been keen users of open source technology. Furthermore, it is well known that the entire capital markets community heavily depends on Linux and other open source initiatives. So we see this as the perfect venue to release the final pieces of the OpenMAMA stack and to continue advocating its value proposition to all interested participants.

That, and everyone loves a party!

More details on O’Sullivan’s keynote, as well as the other keynote presentations and sessions, can be found on The Linux Foundation Collaboration Summit website. If you’re not already attending, you can still request an invitation.


Can Linux Win in Cloud Computing?

Posted on Wednesday, 14 March, 2012

Gerrit Huizenga is a Cloud Architect at IBM (and a fellow Portlander) and will be speaking at the upcoming Linux Foundation Collaboration Summit in a keynote session titled “The Clouds Are Coming: Are We Ready?” Linux is often heralded as the platform for the cloud, but Huizenga warns that while it is in the best technical position to earn this title, there is work to do to make it a reality.

Huizenga took a few moments earlier this week to chat with us as he prepares for his controversial presentation at the Summit.

You will be speaking at The Linux Foundation Collaboration Summit about Linux and the cloud. Can you give us a teaser on what we can expect from your talk?

Huizenga: Clouds are at the top of every IT department’s list of new and key technologies to invest in. Obviously high on those lists are things like VMware and Amazon EC2. But where is the open source community in terms of comparable solutions which can be easily set up and deployed? Is it possible to build a cloud with just open source technologies? Would that cloud be a “meets min” sort of cloud, or can you build a full-fledged, enterprise-grade cloud with open source today? What about using a hybrid of open source and proprietary solutions? Is that possible, or are we locked into purely proprietary solutions today? Will open standards help us? What are some recommendations today for building clouds?

Linux is often applauded as the “platform for the cloud.” Do you think this is accurate? If not, what still needs to be done? If so, what is it about Linux that gives it this reputation?

Huizenga: Linux definitely has the potential to be a key platform for the cloud. However, it isn’t there yet. There are a few technology inhibitors with respect to Linux as the primary cloud platform, as well as a number of marketplace challenges. Those challenges can be addressed, but there is definitely some work to do in that space.

What are the advantages of Linux for both public and private clouds?

Huizenga: It depends a bit on whether you consider Linux as a guest or virtual server in a cloud, or whether it is the hosting platform of the cloud. The more we enable Linux as a guest within the various hypervisors, and enable Linux to be managed within the cloud, the greater the chance of standardizing on Linux as the “packaging format” for applications.

This increases the overall presence of Linux in the marketplace and in some ways simplifies ISVs’ lives in porting applications to clouds. As a hosting platform, one of the biggest advantages for cloud operators is the potential cost/pricing model for Linux and the overall impact on the cost of operating a cloud. And the level of openness that Linux provides should simplify the ability to support the cloud infrastructure and, over time, increase the number of services that can be provided by a cloud. But we still have quite a bit of work to do to make Linux a ubiquitous cloud platform.

What is happening at the Linux development level to support the rapidly maturing cloud opportunity? What does the community need from other Linux users and developers to help accelerate its development and address these challenges?

Huizenga: I’ll talk about some of the KVM technologies that we need to continue to develop to enable cloud, as well as some of the work on virtual server building & packaging, DevOps, Deployment, and Management. There are plenty of places for the open source community to contribute and several talks at the Collaboration Summit should dive further into the details as well.

What do you make of Microsoft running Linux on Azure?

Huizenga: Anything that lets us run Linux in more places must be good!

More information about Huizenga’s talk can be found on The Linux Foundation Collaboration Summit schedule. If you’re interested in joining us, you can also request an invitation to attend.