Tuesday, May 22, 2018

Linux Kernel 4.14 Released, This is What’s New

A new kernel appears!
Linus Torvalds has announced the release of Linux 4.14, the latest stable release of the Linux kernel.
Linux 4.14 brings a number of new features and changes, and is set to become the next long-term support (LTS) release, backed by several years of ongoing maintenance and support.
Announcing the arrival of Linux 4.14 on the Linux Kernel Mailing List (LKML), Linus Torvalds writes:
“Go out and test the new 4.14 release, that is slated to be the next LTS kernel – and start sending me pull requests for the 4.15 merge window.”

Linux 4.14 Features & Changes

Linux 4.14 features a huge stack of improvements to drivers, hardware enablement, file system tweaks, performance tune-ups, and lots more.
One of the “headline” features is support for larger memory limits on x86_64 hardware. The release increases the hard limits to 128 PiB of virtual address space and 4 PiB of physical address space, up from 256 TiB of virtual address space and 64 TiB of physical address space.
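Those figures correspond to moving from 48-bit to 57-bit virtual addresses and from 46-bit to 52-bit physical addresses, courtesy of the new five-level page table mode on x86_64. A quick sanity check of the arithmetic, as a tiny Python snippet:

    # Check the address-space figures quoted above: 57-bit virtual / 52-bit
    # physical with five-level paging, versus the older 48-bit / 46-bit limits.
    PiB = 2 ** 50
    TiB = 2 ** 40

    print(2 ** 57 // PiB, "PiB of virtual address space (new)")   # 128
    print(2 ** 52 // PiB, "PiB of physical address space (new)")  # 4
    print(2 ** 48 // TiB, "TiB of virtual address space (old)")   # 256
    print(2 ** 46 // TiB, "TiB of physical address space (old)")  # 64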
Other notable changes:
  • New Realtek Wi-Fi driver (RTL8822BE)
  • Btrfs Zstd compression support
  • HDMI CEC support for Raspberry Pi
  • Secure memory encryption for AMD EPYC processors
  • ASUS T100 touchpad support
  • Heterogeneous Memory Management
  • AMDGPU DRM Vega improvements
  • Better support for Ryzen processors
Each kernel update also introduces support for new ARM devices/boards/SoCs. Linux 4.14 introduces support for the Raspberry Pi Zero W, the Banana Pi R2, M3, M2M and M64, Rockchip RK3328/Pine 64, and others.
For a fuller look at everything that’s new in Linux Kernel 4.14, head over to the official mailing list announcement or the Linux kernel website and follow the links there, or check out Kernel Newbies for a more parseable presentation of the changes.

Monday, May 21, 2018

Ubuntu 18.04: Multicloud Is the New Normal






Canonical CEO Mark Shuttleworth explains the functionality of Ubuntu 18.04 in an international conference call

Canonical last week released the Ubuntu 18.04 LTS platform for desktop, server, cloud and Internet of Things use. Its debut followed a two-year development phase that led to innovations in cloud solutions for enterprises, as well as smoother integrations with private and public cloud services, and new tools for container and virtual machine operations.
The latest release drives new efficiencies in computing and focuses on the big surge in artificial intelligence and machine learning, said Canonical CEO Mark Shuttleworth in a global conference call.
Ubuntu has been a platform for innovation over the last decade, he noted. The latest release reflects that innovation and comes on the heels of extraordinary enterprise adoption on the public cloud.
The IT industry has undergone some fundamental shifts since the last Ubuntu upgrade, with digital disruption and containerization changing the way organizations think about next-generation infrastructures. Canonical is at the forefront of this transformation, providing the platform for enabling change across the public and private cloud ecosystem, desktop and containers, Shuttleworth said.
"Multicloud operations are the new normal," he remarked. "Boot time and performance-optimized images of Ubuntu 18.04 LTS on every major public cloud make it the fastest and most-efficient OS for cloud computing, especially for storage and compute-intensive tasks like machine learning," he added.
Ubuntu 18.04 comes as a unified computing platform. Having an identical platform from workstation to edge and cloud accelerates global deployments and operations. Ubuntu 18.04 LTS features a default GNOME desktop. Other desktop environments are KDE, MATE and Budgie.

Diversified Features

The latest technologies under the Ubuntu 18.04 hood are focused on real-time optimizations and an expanded Snapcraft ecosystem to replace traditional software delivery via package management tools.
The biggest innovations in Ubuntu 18.04 relate to cloud computing enhancements, Kubernetes integration, and Ubuntu as an IoT control platform. Features that make the new Ubuntu a platform for artificial intelligence and machine learning are also prominent.
The Canonical distribution of Kubernetes (CDK) runs on public clouds, VMware, OpenStack and bare metal. It delivers the latest upstream version, currently Kubernetes 1.10. It also supports upgrades to future versions of Kubernetes, expansion of the Kubernetes cluster on demand, and integration with optional components for storage, networking and monitoring.
As a platform for AI and ML, CDK supports GPU acceleration of workloads using the Nvidia DevicePlugin. Further, complex GPGPU workloads like Kubeflow work on CDK. That performance reflects joint efforts with Google to accelerate ML in the enterprise, providing a portable way to develop and deploy ML applications at scale. Applications built and tested with Kubeflow and CDK are perfectly transportable to Google Cloud, according to Shuttleworth.
Developers can use the new Ubuntu to create applications on their workstations, test them on private bare-metal Kubernetes with CDK, and run them across vast data sets on Google's GKE, said Stephan Fabel, director of product management at Canonical. The resulting models and inference engines can be delivered to Ubuntu devices at the edge of the network, creating an ideal pipeline for machine learning from the workstation to rack, to cloud and device.
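To give a rough feel for that workflow, here is a minimal sketch using the official Kubernetes Python client to submit a GPU-backed pod to a CDK (or any other) cluster. It assumes the Nvidia device plugin is running; the pod name, image and namespace are placeholders, not anything shipped by Canonical or Google:

    # Minimal sketch: schedule a GPU-backed pod with the official Kubernetes
    # Python client. Assumes the Nvidia device plugin is running on the cluster;
    # the pod name, image and namespace are placeholders, not part of CDK itself.
    from kubernetes import client, config

    config.load_kube_config()  # reads the kubeconfig written by CDK / kubectl

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="training-job"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[client.V1Container(
                name="trainer",
                image="example.com/ml/trainer:latest",      # hypothetical image
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}           # exposed by the device plugin
                ),
            )],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)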

Snappy Improvements

The latest Ubuntu release allows desktop users to receive rapid delivery of the latest application updates. Besides having access to typical desktop applications, software developers and enterprise IT teams can benefit from the acceleration of snaps, deployed across the desktop to the cloud.
Snaps have become a popular way to get apps on Linux. More than 3,000 snaps have been published, and millions have been installed, including official releases from Spotify, Skype, Slack and Firefox.
Snaps are fully integrated into Ubuntu GNOME 18.04 LTS and KDE Neon. Publishers deliver updates directly, and security is maintained with enhanced kernel isolation and system service mediation.
Snaps work on desktops, devices and cloud virtual machines, as well as bare-metal servers, allowing a consistent delivery mechanism for applications and frameworks.

Workstations, Cloud and IoT

Nvidia GPGPU hardware acceleration is integrated in Ubuntu 18.04 LTS cloud images and Canonical's OpenStack and Kubernetes distributions for on-premises bare metal operations. Ubuntu 18.04 supports Kubeflow and other ML and AI workflows.
Kubeflow, the Google approach to TensorFlow on Kubernetes, is integrated into Canonical Kubernetes along with a range of CI/CD tools, and aligned with Google GKE for on-premises and on-cloud AI development.
"Having an OS that is tuned for advanced workloads such as AI and ML is critical to a high-velocity team," said David Aronchick, product manager for Cloud AI at Google. "With the release of Ubuntu 18.04 LTS and Canonical's collaborations to the Kubeflow project, Canonical has provided both a familiar and highly performant operating system that works everywhere."
Software engineers and data scientists can use tools they already know, such as Ubuntu, Kubernetes and Kubeflow, and greatly accelerate their ability to deliver value for their customers, whether on-premises or in the cloud, he added.

Multiple Cloud Focus

Canonical has seen significant adoption of Ubuntu in the cloud, apparently because it offers an alternative, said Canonical's Fabel.
Typically, customers ask Canonical to deploy OpenStack and Kubernetes together; that pattern is emerging as a common operational framework, he said. "Our focus is delivering Kubernetes across multiple clouds. We do that in alignment with the Microsoft Azure service."

Better Economics

Economically, Canonical sees Kubernetes as a commodity, so the company built it into Ubuntu's support package for the enterprise. It is not an extra, according to Fabel.
"That lines up perfectly with the business model we see the public clouds adopting, where Kubernetes is a free service on top of the VM that you are paying for," he said.
The plan is not to offer overly complex models based on old-school economic models, Fabel added, as that is not what developers really want.
"Our focus is on the most effective delivery of the new commodity infrastructure," he noted.

Private Cloud Alternative to VMware

Canonical OpenStack delivers private cloud with significant savings over VMware and provides a modern, developer-friendly API, according to Canonical. It also has built-in support for NFV and GPGPUs. The Canonical OpenStack offering has become a reference cloud for digital transformation workloads.
Today, Ubuntu is at the heart of the world's largest OpenStack clouds, both public and private, in key sectors such as finance, media, retail and telecommunications, Shuttleworth noted.

Other Highlights

Among Ubuntu 18.04's benefits:
  • Containers for legacy workloads with LXD 3.0 -- LXD 3.0 enables "lift-and-shift" of legacy workloads into containers for performance and density, an essential part of the enterprise container strategy.
    LXD provides "machine containers" that behave like virtual machines in that they contain a full and mutable Linux guest operating system, in this case, Ubuntu. Customers using unsupported or end-of-life Linux environments that have not received fixes for critical issues like Meltdown and Spectre can lift and shift those workloads into LXD on Ubuntu 18.04 LTS with all the latest kernel security fixes.
  • Ultrafast Ubuntu on a Windows desktop -- New Hyper-V optimized images developed in collaboration with Microsoft enhance the virtual machine experience of Ubuntu in Windows.
  • Minimal desktop install -- The new minimal desktop install provides only the core desktop and browser for those looking to save disk space and customize machines with their specific apps or requirements. In corporate environments, the minimal desktop serves as a base for custom desktop images, reducing the security cross-section of the platform. 
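As a companion to the LXD item above, here is a minimal sketch of the lift-and-shift workflow using the pylxd client library. It assumes a local LXD daemon; the container name and image details are illustrative placeholders rather than a prescribed Canonical procedure:

    # Minimal sketch: lift a workload into a "machine container" with pylxd.
    # The container name and image alias are illustrative; any image your LXD
    # daemon can reach would work.
    from pylxd import Client

    lxd = Client()  # connects to the local LXD daemon over its unix socket

    config = {
        "name": "legacy-app",
        "source": {
            "type": "image",
            "mode": "pull",
            "protocol": "simplestreams",
            "server": "https://cloud-images.ubuntu.com/releases",
            "alias": "16.04",
        },
    }

    container = lxd.containers.create(config, wait=True)
    container.start(wait=True)
    print(container.status)  # "Running" once the guest has booted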

Remote Linux and Windows Support Available

If you need any of the following services, please contact V. Ganesh, phone 7418060809, e-mail: mkv.ganesh@gmail.com
  • Remote Linux and Windows server support
  • Freelance AWS training
  • Networking
  • Backup
  • High-volume data entry
  • Remote setup and maintenance of VoIP with Asterisk (open source)
  • Remote setup and maintenance of FreeNAS (open source)
  • OpenVPN (open source VPN software)
  • Setup and maintenance of Windows and Linux servers in the cloud
  • Setup and support for Symantec Endpoint Protection client/server on Windows
  • Static and dynamic website development (HTML, WordPress, Drupal and Magento)

Saturday, November 25, 2017

Microsoft brings Apache Spark, Cassandra, MariaDB to its Azure cloud

Microsoft has brought several third-party popular platforms to its Azure cloud aimed at developers and data analysts.


The new Azure capabilities include:
  • Azure Databricks, a beta Apache Spark cluster computing platform for developers to get insights out of enterprise data. Developers can request to participate in the beta.
  • An API for running the Apache Cassandra NoSQL database as a service on Azure. This service leverages Microsoft’s Azure Cosmos DB, a globally distributed database. Developers can use familiar Cassandra tools (see the driver sketch after this list). Microsoft is offering a signup for the API by logging into an Azure account.
  • An upcoming preview of Azure Database for MariaDB, a fork of MySQL. Developers can sign up for the beta.
  • Azure DevOps Projects, a beta service for configuring a DevOps pipeline. Azure DevOps Projects lets developers set up Git repositories and automate build and release pipelines.
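Because the Cassandra API speaks the standard wire protocol, existing drivers should largely work as-is. Below is a hedged sketch using the open-source Python cassandra-driver; the contact point, port and credentials are placeholders, and the exact TLS options depend on the driver version and on what Azure provisions for the account:

    # Sketch: connecting to a Cassandra-compatible endpoint with the DataStax
    # Python driver. The contact point, port and credentials are placeholders;
    # a Cosmos DB Cassandra API account supplies its own values.
    from ssl import PROTOCOL_TLSv1_2, SSLContext

    from cassandra.auth import PlainTextAuthProvider
    from cassandra.cluster import Cluster

    ssl_context = SSLContext(PROTOCOL_TLSv1_2)  # the service requires TLS
    auth = PlainTextAuthProvider(username="<account>", password="<account-key>")

    cluster = Cluster(["<account>.cassandra.cosmosdb.azure.com"], port=10350,
                      auth_provider=auth, ssl_context=ssl_context)
    session = cluster.connect()

    session.execute(
        "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = "
        "{'class': 'SimpleStrategy', 'replication_factor': 1}"
    )
    session.execute(
        "CREATE TABLE IF NOT EXISTS demo.events (id int PRIMARY KEY, payload text)"
    )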

AWS launches C5 instances for EC2 alongside new 'cloud-optimised' hypervisor



Amazon Web Services (AWS) has announced the availability of C5 instances, aimed at more compute-intensive workloads for the EC2 cloud.
The C5 instances – three from the sharp end in Amazon’s compute class, behind G2, P2 and F1 – were introduced as the newest iteration back in November last year at the company’s Re:Invent show. The C5 promises 3.0 GHz Intel Xeon Scalable processors and double the vCPU and memory capacity – up to 72 vCPUs and 144 gibibytes of memory – when compared with previous C4 instances.
Applications the C5 instances are better equipped to handle include batch processing, distributed analytics, high performance computing (HPC), ad serving, video encoding, and multiplayer gaming. The instances will be available in three regions: US East (N. Virginia), US West (Oregon), and EU (Ireland), with support for additional regions in the pipeline.
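Using the new instance type is simply a matter of naming it at launch time. Here is a minimal boto3 sketch; the AMI ID and key pair are placeholders to substitute with your own:

    # Minimal sketch: launch a C5 instance with boto3. The AMI ID and key pair
    # are placeholders; at launch C5 was limited to us-east-1, us-west-2 and
    # eu-west-1.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-xxxxxxxx",    # placeholder AMI ID
        InstanceType="c5.large",   # sizes go up to c5.18xlarge (72 vCPUs)
        KeyName="my-key-pair",     # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    print(instances[0].id)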
Alongside this, AWS dropped a few customer names into the mix. One customer is particularly well-known – having been analysed by this publication on several occasions – and is arguably the poster child for AWS itself. Netflix said it saw up to a 140% performance improvement in industry standard CPU benchmarks compared with C4.
For the high performance computing side, Alces Flight offers researchers on demand HPC clusters, or ‘self-service supercomputers’ in minutes. The company, a member of the AWS Marketplace, said C5 had a ‘direct benefit’ for its user base ‘on both price and performance dimensions.’
The press materials also made mention of a new hypervisor which AWS is rolling out for C5 instances to ‘allow applications to use practically all of the compute and memory resources of a server, delivering reduced cost and even better performance.’
According to this page, accessed by CloudTech earlier today (screenshot), and first spotted by The Register, the new hypervisor for Amazon EC2 “is built on core Linux Kernel-based Virtual Machine (KVM) technology, but does not include general purpose operating system components.”
KVM’s best-known user in this sphere is Google. In January this year, the search giant issued a blog post advocating seven methods it uses to security-harden the KVM hypervisor. As Ariel Maislos, CEO of Stratoscale, pointed out in this publication last year, AWS has long been partnered with Xen for its hypervisor needs.
The FAQ page added that all new instance types will ‘eventually’ use the new EC2 hypervisor, but for now some new instance types will use Xen ‘depending on the requirements of the platform.’ Yet, as The Register reports, references to KVM have been disappearing from the company’s pages.

Friday, November 17, 2017

NetApp unveils new Hybrid Cloud services

NetApp's Hybrid Cloud innovations allow customers to break down barriers to transformation by helping them to unify data across the widest range of cloud and on-premises environments.
Cloud firm NetApp on Friday introduced new Hybrid Cloud offerings to help customers efficiently use data for competitive advantage. 

NetApp introduced "NetApp HCI", the industry's first enterprise-scale hyper-converged solution for better performance, independent scaling, and Data Fabric integration. 

The other offerings include new consumption purchase models and improved all-flash capabilities. 

"The announcements of enterprise-scale HCI and new Hybrid Cloud .. 



Wednesday, November 15, 2017

What are shielded virtual machines and how to set them up in Windows Server?

Virtualization can expose data and encryption keys to hackers. Microsoft's shielded virtual machines and Host Guardian Service lock them down.
For all its benefits, the drive to virtualize everything has created a very big security issue: Virtualization creates a single target for a potential security breach. When a host runs 50 virtual machines (VMs) and is attacked, then you have a real problem. One compromised host compromises the 50 VMs running on it, and now you have what I lovingly call a “holy s**t” moment. Because you virtualized, you turned a whole bunch of servers and operating systems into just a couple of files that are super easy to steal.
The industry needs a way to protect against online and offline attacks that could compromise entire farms of VMs. Microsoft has done some work in this area in Windows Server 2016 with the shielded virtual machine, and its sister service, the Host Guardian Service (HGS). Let’s look at what the folks in Redmond have done.

Understanding the security problem with virtualization

Let’s frame the problem as a set of challenges that need to be solved for a security solution to mitigate the issues virtualization poses.
  1. On any platform, a local administrator can do anything on a system. Anything a guest does to protect itself, like encryption, can be undone by a local administrator. This is comparable to a data center, where all of the access control lists and fancy stuff you do on the inside of an operating system running on a racked server doesn’t matter when you can plug hacking tools into a USB port, boot off it, and copy everything there. Or I can take the system off the rack, drive off with it, and boot it up at home. Even drive encryption can be bypassed by some of these tools by injecting malware into boot sequences and stealing keys out of memory.
  2. Any seized or infected host administrator accounts can access guest VMs. As you might predict, the bad guys know this and target these individuals with increasingly sophisticated phishing attacks and other attempts to gain privileged access. The prized targets are no longer individual desktops and poorly protected home machines. The hacking target market has matured. The new targets are VM hosts in cloud data centers, public and private, with 10 or 15 guests on them, almost always packed to the gills with important information and the fabric administrator accounts that control those hosts. This virtualization fabric has to be protected, since more than just the host administrator has the ability to do harm. With VMs, the server administrator, storage administrator, network administrator, backup operator, and fabric administrator all have virtually unfettered access.
  3. Tenant VMs hosted on a cloud provider’s infrastructure (fabric) are exposed to storage and network attacks while unencrypted. The two main points here are: First, being encrypted at rest while not booted is worthless when your VM is infected while it is running in production. Second, the best offline defenses are worthless against network and storage attacks that execute while a machine is on.
  4. As technology currently stands, it is impossible to identify legitimate hosts without hardware-based verification. There is no way you can tell a good host from a bad host without some type of function keying off a property of a piece of silicon.
Microsoft’s answer to these four points is new to Windows Server 2016—the shielded VM and the Host Guardian Service.

What is a shielded virtual machine (VM)?

A shielded VM protects against inspection, theft, and tampering from both malware and data center administrators, including fabric administrators, storage administrators, virtualization host administrators, and other network administrators.
Let me explain how a shielded VM works: It is a Generation 2 VM. The main data file for the VM, the VHDX file, is encrypted with BitLocker so that the contents of the virtual drives are protected. The big problem to overcome is that you must put the decryption key somewhere. If you put the key on the virtualization host, administrators can view the key and the encryption is worthless. The key has to be stored off-host in a siloed area.
The solution is to equip the Generation 2 VM with a virtual trusted platform module (vTPM) and have that vTPM secure the BitLocker encryption keys just like a regular silicon TPM would handle the keys to decrypt BitLocker on an ordinary laptop.  Shielded VMs run on guarded hosts, or regular Hyper-V hosts that are operating in virtual secure mode—a setting that provides process and memory access protection from the host by establishing a tiny enclave off to the side of the kernel. (It doesn’t even run in the kernel, and all it does is talk with the guardian service to carry out the instructions about releasing or holding on to the decryption key.)

What is the Host Guardian Service?

How does the VM know when to release the key? Enter the Host Guardian Service (HGS), a cluster of machines that generally provide two services: attestation, which double-checks that only trusted Hyper-V hosts can run shielded VMs; and the Key Protection Service, which holds the power to release or deny the decryption key needed to start the shielded VMs in question. The HGS checks out the shielded virtual machines, checks out the fabric on which they are attempting to be started and run, and says, “Yes, this is an approved fabric and these hosts look like they have not been compromised. Release the Kraken! I mean keys.” The whole shebang is then decrypted and run on the guarded hosts. If any one of these checks and balances fails, then keys are not released, decryption is not performed, and the shielded VM fails to launch.
How does the HGS know whether a virtual machine is permitted to run on a fabric? The VM’s creator—the owner of the data—designates that a host must be healthy and pass a certain number of checks to be able to run the VM. The HGS attests to the health of the host requesting permission to run the VM before it releases the keys to decrypt the shielded VM. The protections are rooted in hardware as well, making them almost surely the most secure solution on the market today.
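To make that attestation-then-key-release flow concrete, here is a purely conceptual Python sketch of the decision just described. It is not Microsoft's implementation, and every name in it is invented for illustration:

    # Conceptual sketch of the decision described above: release a shielded VM's
    # key only to a host that passes attestation. Illustrative only, not
    # Microsoft's code; every field and function name here is invented.
    from dataclasses import dataclass
    from typing import Dict, Set

    @dataclass
    class Host:
        tpm_id: str           # identity rooted in the host's physical TPM
        ci_measurement: str   # measured code-integrity state of the host
        hardware_model: str

    @dataclass
    class ShieldedVM:
        name: str
        wrapped_key: bytes    # BitLocker key protected by the VM's vTPM

    def attest(host: Host, registered_tpms: Set[str], ci_policy: Dict[str, str]) -> bool:
        """Attestation service: is this a known guarded host in a healthy state?"""
        return (host.tpm_id in registered_tpms
                and ci_policy.get(host.hardware_model) == host.ci_measurement)

    def request_key(host: Host, vm: ShieldedVM,
                    registered_tpms: Set[str], ci_policy: Dict[str, str]) -> bytes:
        """Key Protection Service: hand the key only to attested hosts."""
        if not attest(host, registered_tpms, ci_policy):
            raise PermissionError(f"{vm.name}: host failed attestation, key withheld")
        return vm.wrapped_key  # in reality the key is re-wrapped for the host's vTPM

    # A registered host with the expected code-integrity measurement gets the key;
    # anything else raises, and the shielded VM simply does not start.
    host = Host(tpm_id="tpm-1234", ci_measurement="ci-hash-abc", hardware_model="vendor-x1")
    vm = ShieldedVM(name="sql-01", wrapped_key=b"<wrapped BitLocker key>")
    print(request_key(host, vm, registered_tpms={"tpm-1234"},
                      ci_policy={"vendor-x1": "ci-hash-abc"}))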

How to create shielded virtual machines

Creating shielded VMs is not that different from creating a standard VM. The real difference, apart from being a Generation 2 VM, is the presence of shielding data. Shielding data is an encrypted lump of secrets created on a trusted workstation. This lump of secrets can include administrator credentials, RDP credentials, and a volume signature catalog that prevents malware from being planted in the template disk from which future shielded VMs are created. This catalog helps validate that the template has not been modified since it was created. A wizard, called the Shielding Data File Wizard, lets you create these bundles. A Protected Template Disk Creation Wizard makes that process run a little more smoothly as well.

Differences between shielded VMs and regular VMs

A shielded VM truly is shielded even from the fabric administrator, to the point where in System Center Virtual Machine Manager or even the bare Hyper-V Manager, you simply cannot connect via VM console to a shielded VM. You must use RDP and authenticate to the guest operating system, where the owner of the VM can decide who should be allowed to access the VM console session directly.
The fabric administrator doesn’t get automatic access. This effectively means that the administrator on the guest operating system of the VM ends up being the virtualization administrator in shielded VM scenarios, not the owner of the host infrastructure as would be the case with typical standard virtualization deployment. This makes shielded VMs a perfect choice for domain controllers, certificate services, and any other VM running a workload with a particularly high business impact.
This transfer of virtualization administrator capabilities begs the question of what to do, then, when a VM is borked and you can no longer access it over the network. This is what the “repair garage” is for. An administrator can park a broken VM inside another shielded VM that is functional and use nested virtualization (Hyper-V within Hyper-V) to run it, connect to the shielded repair garage over RDP like any other shielded VM, and make repairs to the nested broken VM within the safe confines of the shielded garage VM. Once repairs are complete, the fabric administrator can back the newly repaired VM out of the shielded repair garage and put it back onto the protected fabric as if nothing had happened.
The guarded fabric can run in a couple of modes: First, to make initial adoption simpler, there is a mode where the fabric administrator role is still trusted. You can set up an Active Directory trust and a group in which these machines can register, and then you can add Hyper-V host machines to that group to gain permission to run shielded VMs. This is a weaker version of the full protection, since the fabric administrator is trusted and there are no hardware-rooted trust or attestation checks for boot and code integrity.
The full version is when you register each Hyper-V host’s TPM with the host guardian service and establish a baseline code integrity policy for each different piece of hardware that will host shielded VMs. With the full model, the fabric administrator is not trusted, the trust of the guarded hosts is rooted in a physical TPM, and the guarded hosts have to comply with the code integrity policy for keys to decrypt the shielded VMs to be released.
Other notes about how shielded VMs behave and requirements for running them:
  • Guarded hosts require you to be running Windows Server 2016 Datacenter edition—the more expensive one, of course. This feature does not exist in Standard edition.
  • Windows Nano Server is not only supported in this scenario, it is recommended. Nano Server can be the guest operating system within a shielded VM, serve as the guarded Hyper-V host, and run the HGS. Nano Server is a great lightweight choice for the latter two roles, in my opinion.
  • Shielded VMs can only be Generation 2 VMs, which necessitates that the guest operating system be Windows 8/Windows Server 2012 or newer (including Windows 10, Windows Server 2012 R2, and Windows Server 2016).
  • Contrary to what you might think, the vTPM is not tied to the physical TPM on any particular server. For one, dividing up a physical TPM securely would be a real challenge. For another, the TPM has to move with the VM so that shielded VMs maintain all of the high availability and fault tolerance capabilities (Live Migration and so on) that regular VMs have.

The last word

The rush to virtualize all things has left a key attack vector virtually unprotected until now. Using shielded VMs adds a super layer of security to the applications that you have right now, even those that are running on Linux. Think of shielded VMs as the anti-Edward Snowden -- protection against the rogue administrator. It could make Windows Server 2016 easily worth the price of admission for your business.