Guest blog from Simon Minton, Global Cyber Security Advisor at Cisco
Find out why taking a Zero Trust approach to developing and provisioning apps can help prevent security breaches
Sharing meeting notes. Processing customer transactions. Logging expenses. Signing contracts. More and more business processes are getting the app treatment. And that means more and more data is being exposed to potential security threats.
To ensure apps deliver on stakeholders’ agility and efficiency expectations, organisations are increasingly using the cloud to provision functionality to users both in the workplace and beyond. Apps aren’t just being provisioned via the cloud; they are being developed in the cloud too – and that introduces another layer of complexity and risk.
Cloud-native development enables organisations to build and update apps quickly. But the speed at which apps evolve can result in security being overlooked – especially as organisations increasingly bring application development back in-house due to its strategic and competitive importance.
Join the DevSecOps revolution
The need to balance security with agility has given rise to a new operating model in the app development world. DevSecOps isn’t just about adopting new processes and tools; it’s about adopting a new mindset in which everyone in the app lifecycle is responsible for security – whether they are a developer, a business stakeholder or a user.
DevSecOps shifts security from a bolt-on activity late in the process of application development, when much of the architecture has already been defined, to a fundamental part of design, build and continuous delivery.
In order for DevSecOps principles to take root in an organisation, developers need to be encouraged to take ownership of security, much like they are incentivised to develop metrics around application availability and performance.
Most data breaches stem from two interlinked scenarios: exploitation of the application itself, exploitation of the infrastructure hosting it, or both. Several recent high-profile breaches occurred because of a misconfiguration of the supporting cloud infrastructure. The shared responsibility model adopted by all cloud providers puts the onus on customers to ensure that cloud services are properly configured.
Ensuring developers and IT security teams work together to proactively remediate misconfigurations in an application or infrastructure can help to reduce the impact from an incident or breach. Data analytics will be increasingly important for both teams when pinpointing application and cloud misconfigurations as well as malicious activity.
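To make the remediation idea concrete, the sketch below audits a simplified resource-settings dictionary against an expected policy. The setting names and the policy itself are illustrative assumptions, not any provider's real API; in practice these checks would query the cloud provider directly.

```python
# Minimal sketch of a cloud-configuration audit, assuming a simplified
# settings dictionary; real checks would query the provider's API.
EXPECTED = {
    "public_access_blocked": True,   # storage must not be world-readable
    "encryption_at_rest": True,      # data must be encrypted on disk
    "logging_enabled": True,         # access logs feed the analytics pipeline
}

def audit_config(resource_name, settings):
    """Return a list of findings where settings deviate from policy."""
    findings = []
    for key, expected in EXPECTED.items():
        actual = settings.get(key)   # a missing setting is also a finding
        if actual != expected:
            findings.append(
                f"{resource_name}: {key} is {actual!r}, expected {expected!r}"
            )
    return findings

# Example: a bucket left world-readable, with logging never configured.
issues = audit_config(
    "customer-uploads",
    {"public_access_blocked": False, "encryption_at_rest": True},
)
```

Feeding findings like these into the same analytics pipeline both teams use is what lets developers and security staff remediate proactively rather than after an incident.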
Monitoring solutions that leverage machine learning and behavioural modelling can provide visibility of activity not only on the network but also within the development environment and across cloud resources – which can act as an early warning of a potential security breach on an app or within the broader ecosystem.
For example, Cisco Stealthwatch collects and analyses network and cloud telemetry and correlates threat behaviours seen locally within the enterprise with those seen globally to detect anomalies that might be malicious.
To trust or not to trust
Advanced threat detection solutions can also help to identify policy violations and misconfigured cloud assets that could compromise the future security of an app. But visibility into potential app vulnerabilities needs to go one step further.
With internal and external developers increasingly using internet-based open source elements, such as software libraries, to accelerate time-to-market, apps have become a patchwork of unseen – and often unknown – components, all of which could introduce unexpected risks and dependencies.
Around 80% of an enterprise application is created using open source software libraries downloaded from the internet. Organisations often have very limited understanding of the risks inherent in these libraries or lack the policies needed to remediate known vulnerabilities.
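The remediation policy the paragraph calls for can start very simply: compare pinned dependency versions against an advisory list. The sketch below uses a hard-coded map with hypothetical package names and placeholder advisory identifiers purely for illustration; a real implementation would consume a vulnerability advisory feed.

```python
# Illustrative sketch only: check a project's pinned dependencies against a
# map of known-vulnerable versions. Package names and advisory identifiers
# are hypothetical; real data would come from an advisory feed.
KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"): "advisory-0001 (hypothetical identifier)",
    ("parserkit", "0.9.1"): "advisory-0002 (hypothetical identifier)",
}

def scan_dependencies(pinned):
    """pinned: dict of {package: version}. Returns flagged packages."""
    flagged = {}
    for package, version in pinned.items():
        advisory = KNOWN_VULNERABLE.get((package, version))
        if advisory:
            flagged[package] = advisory
    return flagged

# One pinned version matches a known advisory, the other does not.
flagged = scan_dependencies({"examplelib": "1.2.0", "parserkit": "1.0.0"})
```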
By adopting a Zero Trust approach (where everything must be validated before it can be trusted) to app development, organisations will be able to identify potential security flaws much earlier. This will not only save time and money but also avoid reputational damage.
A Zero Trust approach can also be extended beyond the development stage to the entire lifecycle of the app. Users and devices accessing apps also need to be regularly validated to ensure they are not trying to launch an attack or steal data.
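The "never trust, always verify" principle described above can be sketched as a per-request check: every call is re-validated against credential expiry and a registry of known devices, rather than trusting anything by default. All names and fields here are illustrative assumptions, not a real product API.

```python
import time

# Hedged sketch of per-request Zero Trust validation: every request is
# re-checked against token expiry and a device registry. The registry
# contents and token shape are illustrative, not a real product API.
REGISTERED_DEVICES = {"laptop-0042", "phone-0077"}

def validate_request(token, device_id, now=None):
    """Return True only if the token is unexpired AND the device is known."""
    now = now if now is not None else time.time()
    if token.get("expires_at", 0) <= now:
        return False          # stale credentials: deny and force re-auth
    if device_id not in REGISTERED_DEVICES:
        return False          # unknown device: deny, never assume trust
    return True

ok = validate_request({"expires_at": 2_000_000_000}, "laptop-0042", now=1_000)
bad = validate_request({"expires_at": 2_000_000_000}, "tablet-9999", now=1_000)
```

The point of the sketch is that the check runs on every request for the life of the app, not once at login.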
By getting smarter about how they provision and develop apps from the cloud, organisations will be able to protect thousands of employees and customers and provide a richer and safer app experience.
Alright, I admit it, I’m jealous. I joined a start-up! I’ve seen Silicon Valley! We were going to change the world; I was going to be rich beyond the dreams of avarice, leave the rat race behind and open a beach bar somewhere. But you’ll have guessed from the fact that I’m writing this blog that it never happened. With hindsight, I would have joined Frame, the (fairly) new face of cloud-hosted application delivery. Their premise is simple: run any Windows application in the cloud and access it via a browser, no plugins required.
Originally called MainFrame2, the company began life enabling ISVs to offer applications as a service. It got off to a good start, but its fortunes improved massively when the focus changed to end users and the business was relaunched as Frame. With recent investments from Microsoft Ventures, Bain Capital Ventures and In-Q-Tel, growth continues at pace. On top of that, they recently signed a major partnership with VMware to become part of the Workspace One offering with App Express.
Frame is essentially an Application-as-a-Service company, built for the cloud, in the cloud. You install the applications into a sandbox environment and then, when you are ready, publish them to the Frame Desktop for users to consume. Your applications are installed onto Windows Server 2012 machines (Windows Server 2016 and Windows 10 are on the roadmap), with the ability to make use of the GPUs offered by AWS and Azure to handle even the most graphically intensive applications. Screen images are then delivered by Frame’s encrypted, highly compressed display protocol, allowing almost any application to run on almost any computer and reducing the complexity usually associated with virtual desktop computing to a few clicks.
So what are the uses for technology like this? Here are a few examples:
- Think about those expensive CAD and desktop publishing packages. With Frame you can centralise them in the public cloud of your choice, share the licensing costs, utilise cloud storage to make collaboration easy and reduce the need for expensive workstation hardware*
- Consider the education sector and the ability to use inexpensive Chromebooks to access any type of application and then not having to pay for those resources during the holidays
- Mobilise legacy business applications by migrating them to the cloud and using Frame to provide browser-based access without having to install anything on the client
* And not just hardware: Microsoft has introduced a new Windows 10 Pro for Workstations licence that affects any machine with an Intel Xeon or AMD Opteron processor.
However, Frame is not for everyone or every use case. It isn’t a way to deal with legacy applications to ease a Windows 10 migration: if an application won’t install on Windows Server 2012, it isn’t going to work. You also need to understand your responsibilities as a customer. Although you don’t need to license the OS, you still need to patch it, supply your own anti-virus client, update the applications and secure network access to them. And don’t think you can escape the fun that is Evergreen!
Cost-wise, there’s a per-user, per-month charge based on standard, pro or enterprise levels of functionality, plus an hourly rate based on usage and the resources your VMs consume. Automation is key to controlling those costs, ensuring machines are not costing you money when they aren’t being used. There are features within the administrative console and the REST API to schedule the number of machines available and to power machines off when they aren’t required. As with a lot of cloud initiatives, calculating the overall cost is not easy, and it may not necessarily be cheaper than your current on-premises solution. But there are features and functionality that no on-premises solution will ever give you.
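The scheduling logic behind that automation can be sketched as a simple capacity function: decide how many machines should be running from a business-hours window. The window and pool sizes below are hypothetical; Frame’s actual console and REST API expose their own scheduling controls.

```python
# Sketch of the scheduling idea behind keeping hourly VM costs down.
# The hours and pool sizes are hypothetical, for illustration only.
BUSINESS_HOURS = range(8, 18)   # 08:00-17:59 local time
PEAK_POOL = 10                  # machines during business hours
OFF_PEAK_POOL = 1               # single warm spare outside business hours

def desired_capacity(hour, weekday):
    """weekday: 0=Monday..6=Sunday. Returns target number of running VMs."""
    if weekday >= 5:            # weekend: power everything off
        return 0
    if hour in BUSINESS_HOURS:
        return PEAK_POOL
    return OFF_PEAK_POOL

# Tuesday 10:00 vs Saturday 10:00
weekday_target = desired_capacity(10, 1)
weekend_target = desired_capacity(10, 5)
```

A scheduler running this kind of rule against the provider’s API is what stops idle machines accruing hourly charges.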
The big differentiator for Frame is its simplicity and ease of use. When you need to bring in additional services, you just plug them in. Need identity services? Frame supports them. Want to use your own user profile management tool? No problem. Want to connect Dropbox, Box or Google Drive? A couple of clicks and it’s set up, appearing as a mapped drive within the Frame explorer. Want to share your session with someone else to work on a document or drawing? Simply email them a link to the session. Need additional local storage or a database? Just click the utility server option and select your services.
Just as data and business applications are moving to the cloud, it makes sense for client applications to follow. Another nice thing about Frame is that, where companies use multiple clouds, you can place your applications in the best location to serve them, avoiding lock-in. And as client estates become more diverse and users increasingly demand to work from anywhere, the ability to deliver applications simply through a browser becomes ever more enticing.
Frame is very cool technology. If you’re currently considering XenApp running in Azure or XenApp Essentials, or looking at how to mobilise legacy applications, then you need to take a look. There are limitations to where it fits as a solution, but where it is right there are clear benefits. Frame enables powerful applications to be accessed from almost any device, and to be delivered to an entire business anywhere in the world minutes after a single installation, regardless of the endpoints users are on.
So my dalliance with the world of start-ups was not a great success. For the guys at Frame I can see a much brighter future. The question though is how long will it last before someone swallows them up?
Picture this – your alarm clock goes off, you reach across the bed and take a look at your phone; it’s woken you up 30 minutes early – why? Well, you have a meeting at 9:30am, but your car is running low on fuel, so filling up will take 15 minutes, and traffic is a little worse than normal, so it will take an extra 15 minutes to get to the meeting. Welcome to the Internet of Things (IoT), a world where your phone can plan your day ahead and your fridge knows when it’s running dry and orders the groceries itself.
IoT has captured the imagination of industry visionaries and the public for some time now; devices sending and receiving data, opening the door to a futuristic world previously the stuff of science fiction.
As the cities we live in grow into digital ecosystems, the networks around us will connect every individual device, enabling billions of new data exchanges. Industries will enter a new era, from medical devices that talk directly to medical professionals, to the emergence of smart homes that manage themselves efficiently, ensuring energy usage is checked and bills paid on time.
In the workplace it’s equally easy to see the potential advantages of the connections between devices, from intelligent service desk support through to printers, computers and other devices interacting with each other to deliver tangible user and business benefits.
The service desk is a key component for businesses in the digital age, acting as a communication hub for IT issues, a reference point for technology requirements and a tool for asset visibility. Organisations must ask themselves if their current service desk has the technological capacity and capability to manage the multitude of device and operational data in an efficient manner. An intelligent service desk can be the lifeblood of IoT implementation within businesses and enable automation to be realised.
A connected printer in a business ecosystem, for example, could effectively self-serve its own peripheral needs and order its own supplies when needed. However, the management of that data, effective registration and logging of the incident, as well as notification to the financial and technical teams would not be possible without an intelligent service desk – especially when you elevate this to an enterprise scale, with possibly hundreds of connected printers or devices.
When discussing the “connected office”, IT managers will understandably raise concerns around security. The more devices that are connected, the further the periphery is pushed, increasing the number of potential entry points into a network.
An intelligent service desk will enable whitelisting to be integrated into communication protocols: a process which gathers and groups trusted individuals and their devices into known categories. Any unusual request, whether from an IoT-enabled device or an employee, can then be automatically flagged and questioned before action is taken or access is given.
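The whitelisting idea can be sketched as a lookup of trusted user/device pairings, with anything outside the list flagged for review before access is granted. All the names and categories below are illustrative assumptions, not a real service desk schema.

```python
# Minimal sketch of the allowlisting idea: trusted user/device pairs are
# grouped into known categories; anything else is flagged for review.
# All names and categories are illustrative.
ALLOWLIST = {
    ("alice", "printer-01"): "facilities",
    ("alice", "laptop-17"): "staff-devices",
    ("bob", "laptop-23"): "staff-devices",
}

def triage_request(user, device):
    """Return the trust category, or 'flagged' for an unusual pairing."""
    return ALLOWLIST.get((user, device), "flagged")

routine = triage_request("alice", "printer-01")    # known pairing
unusual = triage_request("mallory", "printer-01")  # unknown requester
```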
It is in this scenario that IT managers can reap the benefits of IoT, service desk and employee synchronisation. Through the IoT device communicating with the service desk, the service desk effectively managing all end points and the employee working in tandem with the service desk software, the minimisation of internal security risks can be achieved.
While much of this may sound out of reach, the benefits of IoT and service desk communication are already evident today, through use cases that are fluid, personalised and often driven by an imaginative use of existing and emerging technology. Vending machines stocked with peripheral IT products such as keyboards and mice, for example, already show this relationship in action.
However, with so much data being transferred and the IoT still very ‘new’, there are a number of challenges, the most critical being visibility of the assets connected to and operating on the network.
Communication between all endpoints, and visibility of them, should be fundamental considerations when planning an IoT-based implementation. Intelligent service desks that can enrich the IT support experience, as well as integrate and communicate with the business ecosystem, provide the capability to oversee and monitor every device endpoint communicating with the network.
While this may appear to be a straightforward concept, the enthusiasm to implement, and the complexity of service desk and technology transformation, often drown out the fundamentals – leaving potential backdoors open.
To ensure that there is a holistic approach toward securing connections with the IoT, organisations must challenge all stakeholders (vendors, integrators and consultants) to apply secure IoT principles to the service desk solution and IT operational unit, right from the “drawing board” phase.
Last month, we had the privilege of attending the 25th Annual IT Service Management (ITSM) conference in London. It was great to see so many energised service management comrades at the event, where we delivered a keynote presentation on Computacenter’s Next Generation Service Desk (NGSD) solution, deployed with Hays. This was our first time presenting at the conference, and it certainly lived up to expectations. I co-presented with Simon Gerhardt, who led the NGSD project and is IT Operations Director at Hays.
As a leading recruitment services company, Hays depends on the productivity of its employees, and with technology playing an ever-increasing role in streamlining the recruitment process, employees’ IT queries and issues need to be dealt with quickly. To give users greater choice about when and how they engage, we worked with Hays to digitise their IT support through NGSD. The solution offers an online portal and an intuitive mobile app, giving employees a user-centric experience with anytime, anywhere IT support and a wealth of knowledge banks for self-serving their own IT issues.
We had a strong attendance for the presentation itself, and it was great to see a wide range of engaged, seasoned professionals in one room, all willing to listen and pose questions. During our presentation, I detailed some common trends around service desk demand and workforce expectations of technology, which allowed us to illustrate the vital role of intelligent service desks in modern organisations. We drew on a variety of research studies and customer feedback reports, which uncovered statistics such as 53% of employees being frustrated by a lack of flexibility in working practices and 41% of workers being willing to consider moving to a new role if they don’t get the support they require. Simon Gerhardt did a great job illustrating the tangible impact that NGSD has delivered for Hays.
It was fantastic to be able to relive some of the challenges, successes and outcomes of the NGSD project with Hays, to an audience completely unfamiliar with the solution itself.
A key highlight in the delivery of the project, which raised eyebrows and drew positive feedback, was the implementation timeline. In the early stages of the project, we conducted a one-day hot-house with the internal members of the Hays development team, which played a big part in the rapid implementation. The collaborative approach to solution design and to onboarding the wider workforce fast-tracked several weeks of traditional planning, meaning NGSD went live on time, in just eight weeks.
As well as the implementation timeline, the lack of disruption to working norms during the implementation itself stood out as a notable crowd-pleaser. NGSD can integrate seamlessly with any IT service management platform, which removes a major challenge when transforming a workforce’s key system of engagement: the platform employees interact with remains unchanged while the underlying platform is updated to the NGSD solution.
This means that no disruption is caused from the integration of the solution, maximising productivity and eliminating IT downtime.
Don’t just take my word for it. Barclay Rae, Interim CEO at ITSMF UK, said when he awarded Computacenter the SDI Best Managed Service Desk award: “The three finalists all demonstrated mature service desk operations plus excellent customer engagement and relationships. What marked Computacenter out was their practical focus on innovating for their customers’ customers. Their ‘next generation service desk’ showed how MSPs can lead for their customers and the industry by driving through solutions and innovations that deliver direct customer experience and continual service improvement. This is a great example for the MSP community.”
I would certainly recommend that all service management professionals attend the ITSMF event; it is an excellent platform for meeting and networking across the industry.
See you next year!
2014 really was the year that was. Information Technology (IT) has for quite a while threatened to play such a fundamental role in our lives that we would struggle to function without it. In my opinion, 2014 was the tipping-point year when the silos between “technology” at home, at play and at work blurred into one – a SMART one. Through 2014, something SMART with a processor, memory, storage and a battery at its heart became the secondary brain that the developed and developing world leveraged to optimise and enhance living. Personal and work smartphones became just “smartphones” as BYOD moved from a disruptive marketing fad to an important catalyst for end-user behavioural change within organisations. Mobile working, once the poor relation of working in the office, became the must-have work mode through 2014, opening the door to transformed organisational working outcomes through 2015 – watch this one, as it should be the biggest user-led technology transformation yet.
The internet of “stuff” (I’m bonding the Internet of Things and the Internet of Everything), with sensor-packed connected devices always on and transmitting data across the wireless airspace, emerged as the new battleground for customer service and market control. The IoT/IoE topic gained a head of steam through 2014, but watch it fly through 2015 as connected devices leverage harmonised data to behave in a truly “human SMART” manner. And as I briefly continue with the key stories of 2014, I would be remiss not to mention the shift from cloud HYPE to cloud RIPE, as cloud service providers en masse, utilising software-defined datacentre, network and security ideals, presented an increasing portfolio of real-world, customer-validated services that deliver essential outcomes to a now captive and receptive enterprise audience. Cloud is now here…
Phew – all in all, there was an abundance of IT good news through 2014 that should act as a springboard for greater things through 2015. But was it all good news? Back to the recap: an ever-increasing population of mobile device users generating masses of stored or transmitted information, talking to sensors that transmit or store masses of information, which in turn interact with enterprise IT systems that process and store yet more information, and so on, must be a good thing. When leveraged for beneficial personal, customer, enterprise or societal reasons, the potential to drive value is unparalleled. However, that same footprint of rich, relevant, ever-increasing data is equally digital gold for hackers, who aim to utilise it in a completely different manner.
The result: 2014 also saw one of the biggest concerns now at the executive top table – security breaches – rise to unprecedented levels. With hacks now the norm for end users, offline and online enterprises and even nation states, 2014 and the mass of data moving freely around a heavily digitised world changed the importance that personal consumers and enterprise organisations placed on information security. Since the dawn of the modern IT era, IT security has been just that – security for IT devices – often developed and managed by technologists. 2015 will see a major acceleration of a trend already permeating the enterprise, with IT security becoming a fundamental core of enterprise information security: a holistic view of the end-to-end business security posture that includes IT. Security not a top priority through 2015? Not an option!
But no more talk about 2014; 2015 is here and it’s now. If 2014 was a dry run for the new face of people-centric, end-user-fulfilling IT, 2015 is the year to make it happen. The end user is now king – long live the king (and queen). Stay tuned as we continue with this topic (well, at least for another 11 months).
Until next time.