Cloud – Global Tech News
https://g-technews.com

How to Update Apps in Preparation for a Cloud Migration
Mon, 10 Apr 2023

As part of a cloud migration, evaluate your applications and upgrade them as needed. Discover the advantages of this strategy and be aware of any potential hazards.

There’s no secret sauce to application modernization; organisations must effectively plan ahead when they begin the cloud migration process.

Application modernization is a key consideration when switching from on-premises systems to the public cloud, whether the impetus is a cloud-first directive or a push from within IT to digitise legacy systems. It can reduce operational overhead, replace obsolete technologies and improve user experience.


Modernizing apps 101

If your workforce lacks the essential cloud skills, you’ll need to engage a supplier that takes a methodical approach to app modernisation. The provider should offer a variety of reporting channels, such as dashboards, status meetings, and live demos. These channels give IT teams a way to provide feedback and sign off on next steps.

For these services, large enterprises turn to companies like Accenture, DXC, and Deloitte. Small and medium-sized businesses frequently use local firms or individual consultants. To keep up with escalating demand, public cloud providers are also expanding their professional services arms. Whichever strategy you choose, do your research and get references from previous clients.

As part of a cloud migration, app modernisation proceeds in three stages. The first is assessment: businesses profile and examine their applications, and the migration team evaluates each application’s current state and future-state requirements so that it can function successfully in its new environment. Some apps aren’t meant to move to the cloud at all, whether for lack of a business case, compliance requirements, or economic reasons. The systems integrator or internal team should also factor in application retirement services to help decommission outdated programmes and their dependencies.

During this phase, your migration team should collaborate with your business users as well as the IT and security teams. There are also other analysis services available, like security and compliance.

The actual labor-intensive effort happens during the application migration phase. Large cloud service providers and independent suppliers offer a variety of migration services, including physical transfer devices and direct network links. Migration techniques include rehost, refactor, revise, rebuild, and replace. Rehost, often known as lift-and-shift, moves apps to the cloud in their current state. Although it has traditionally been the most common approach, newer ones are starting to overtake it. Today’s lift-and-extend strategy moves an application to the cloud while replacing some of its components with cloud services. For instance, a business might switch from an on-premises MySQL database to Amazon Aurora to integrate better with other native AWS services.
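The assessment behind choosing among these techniques usually reduces to a handful of questions about each app. Below is a minimal decision-helper sketch; the rules and criteria are purely illustrative assumptions, not an authoritative framework.

```python
# Hypothetical helper mapping coarse assessment answers to the migration
# approaches named above (rehost, refactor, rebuild, replace). Illustrative only.

def pick_strategy(has_saas_replacement: bool,
                  source_available: bool,
                  cloud_ready: bool,
                  worth_reengineering: bool) -> str:
    """Return a coarse migration approach for one application."""
    if has_saas_replacement:
        return "replace"    # retire the app in favour of an off-the-shelf product
    if not source_available:
        return "rehost"     # lift-and-shift is the only real option
    if cloud_ready:
        return "rehost"     # runs fine as-is; move it unchanged
    if worth_reengineering:
        return "rebuild"    # redesign around cloud-native services
    return "refactor"       # modest code changes to fit the new platform

print(pick_strategy(False, True, False, False))  # refactor
```

A real assessment would weigh many more factors (compliance, cost, dependencies), but the shape of the decision is similar.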

Reconciling speed with reality in legacy systems

Whether it’s due to budget restrictions, business strategies, or even internal politics, corporate realities will inevitably affect cloud migration and app modernization efforts. As you plan your transition, remember that everyone needs to buy into the process.

The difficulty of severing legacy apps from their existing infrastructure can also slow the pace of migration. Administrators might need to brush up on Sybase SQL Server, for instance, before moving such databases to the cloud.

Cloud architects will also need to account for fundamental differences between on-premises data centres and the cloud.

Choose Between Modernising Apps and Cloud-Native Apps
Mon, 10 Apr 2023

Review the advantages and disadvantages of each strategy before deciding whether to update current workloads as part of a move or whether you’re leaning towards a cloud-native architecture.

The majority of businesses are now aware that moving an application to the cloud in its current state won’t result in any noticeable advantages. To fully utilise the technology, they should investigate cloud-native apps or upgrade current workloads.

They must first comprehend the distinction between modernised and cloud-native applications, though. The most effective strategy is determined by the workload, the skill levels of the IT teams, and the cloud capabilities they value the most.

1. Flexible software implementation

Modernizing an application is not a good option for all workloads. Often, third-party software cannot be modified at all. Open source programmes can be modified in theory, but doing so involves a steep learning curve and substantial development work. As a result, modernization of third-party software is frequently restricted to the front end.

Modifying internal software can also be challenging, especially if the programme is older. The source code and documentation might be out of date, and programmers with the expertise to update the apps can be hard to find. In this case, it is preferable to modernise only what can be effectively updated for the cloud.

2. Hybrid boundary positioning

Recognize where the boundary must be drawn between what is hosted in the cloud and what remains in the data centre. When making these decisions, which typically come down to considerations like compliance regulations and data security, enterprises frequently discover they must leave critical business components and related software on premises. In some circumstances, neither modernization nor cloud-native development is advantageous, especially if most of the application’s functionality outside the front end is not cloud-ready.

If an on-premises application has loosely connected features and few data transfers, some of its fundamental functionality might be redesigned to be cloud-native.

Due to the frequent access to and updating of databases, high-volume back-end transaction processing is substantially more complex than front-end processing. The majority of businesses have already developed or purchased the apps needed to manage such activities in the data centre.

3. The application load’s characteristics

Scaling and distributing application operations are strengths of the cloud. If an application is only occasionally used, hosting it in the cloud might not offer many advantages. The cloud will, however, be useful for updating programmes that are continuously and heavily used. For large and highly variable workloads, the optimum result comes from moving as much of the application as possible in cloud-native form.

A workload that relies on transactions entered by a small group of employees is a poor candidate for the cloud. The workforce size caps the transaction volume, which most likely won’t change significantly over time. The cloud’s scaling capability would be of little use for this kind of programme. And while containerization would be advantageous, neither microservices nor cloud-native techniques would be necessary.

Consider the advantages of each strategy before choosing one. Applications that aren’t closely tied to the status of an account or an inventory item are ideally suited for cloud native since you may scale or replace them whenever you want. Modernized cloud apps benefit from the scalability and resilience of the cloud, but they are more akin to traditional transaction processing. Your cloud strategy will be greatly influenced by where on that spectrum your needs fall.

The Cloud Market is Being Shaken by AI-Driven Development Trends
Mon, 10 Apr 2023

Cloud vendors are racing to modify their systems to meet growing enterprise demand for AI. Check out the projects under way to prepare for the coming wave of app development.

The increasing power and prevalence of AI and machine learning software are compelling a significant overhaul of computing infrastructure, from system microprocessors all the way up to workplace applications.

Major public cloud providers like AWS, Microsoft, and Google are pursuing more AI-driven development avenues to meet the demands of the constantly shifting cloud industry and provide businesses different options. Due to increased interest in AI, the way businesses create applications has drastically changed. IDC predicts that by 2025, at least 90% of new enterprise applications will use AI components. Applications that include AI as a feature have different design pillars, performance requirements, and system specifications than those that do not.

The goal of AI

Traditional computing environments were optimised to speed the solution of mathematical equations. Throughout the development of digital technology, servers, storage systems, networks, and system software were built to support software that followed predefined, sequential processing patterns. AI computing operates differently.

Karl Freund, an analyst at Moor Insights & Strategy who covers high-performance computing and deep learning, said that “AI is not a suitable fit for systems that add numbers and search for entries in a database.” Instead, AI-based software gathers data and makes inferences. For instance, an AI programme compares patterns to distinguish between the faces of different people.

There are still barriers to AI-driven development.

According to IDC analyst Sriram Subramanian, AI is still in its infancy, and the learning curve is high. To drive the creation and usage of these technologies, an organisation often has to hire a variety of staff members, including data scientists and people with strong backgrounds in statistics and mathematics.

There is no assurance that AI-based software will function effectively, despite the impressive brainpower behind it. According to IDC, one in four businesses report AI project failure rates of up to 50%. The failures have occasionally been stunning. For instance, the MD Anderson Cancer Center spent almost $60 million on a project to use IBM’s Watson to improve patient diagnosis, but ultimately abandoned it.

Data collection and analysis are two obstacles to AI-driven progress. Corporate applications frequently classify data differently, and it can be difficult to reconcile differences when an item, such as a customer’s address, is listed one way in one system and another way in a second. It doesn’t help that the data sets involved are often big, sometimes petabytes, and it’s difficult to simply upload and store all of it.

When this happens, developers frequently turn to various tools and services to relieve some of the pressure. Even so, because the tools for building AI and machine learning applications are complex, preparing company data is frequently time-consuming and error-prone.

Is It Time to Think About a Cloud Departure Plan?
Mon, 10 Apr 2023

In most circumstances, cloud repatriation, or a cloud exodus as it is frequently called, is probably not the best option. Occasionally, though, it really is the proper thing to do.

Although the shift to cloud computing is still expanding steadily, there are occasions when an organisation chooses to leave the cloud or bring some workloads back on-site. Although it’s not common, a cloud exit is a means for companies to take back control of their spending and manage workloads that they think aren’t performing well in a public cloud environment.

According to Hyoun Park, CEO and lead analyst at Amalgam Insights, “Workloads expand in volume and complexity [in the cloud] and demand additional services to support.” Even with generally steady workloads, this can cause an organization’s costs to double or triple quickly. It is no accident, Park noted, that Amazon generates more than $2 billion in operating income per quarter.

Managing a cloud departure

According to Medford, a cloud departure is merely the initial cloud migration in reverse. Unfortunately, putting a cloud exit strategy into action isn’t always simple. Repatriation is always handled case by case, he said, and there is no magic formula.

Moving an application back on premises when it was built, at least in part, to operate in the cloud means abandoning all of that work and returning to the original architecture and design, according to Medford.

Although each reversal is distinct and requires its own methods, Sokolov sees five main aspects that every cloud exit strategy should consider.

  1. Make a careful plan: “The process of returning to an on-premises system takes time. You must thus create a thorough, step-by-step plan and prepare for a number of problems,” explained Sokolov.
  2. Plan your deployment: To ensure that your application deploys quickly and reliably, automate deployment and run thorough tests.
  3. Transfer data consistently: Every move means that data will occasionally reside both locally and in the cloud. Situations like these frequently introduce delays, and you must ensure that your app can handle them, according to Sokolov.
  4. Be in command: Prepare to handle all tasks that formerly fell under the purview of a cloud provider, such as infrastructure and security.
  5. Support it: “You need to have a backup strategy that can shield your company from such insignificant but fatal mishaps as a power outage.” According to Sokolov, a hybrid computing model may be useful.
Effective Planning for Cloud Migration Prevents Downtime
Mon, 10 Apr 2023

Can you use your on-premises application in the cloud? With proper planning, you can prevent downtime during a cloud move.

There are many options available to organisations for moving applications and data to the public cloud. But a transfer frequently involves downtime, which may be annoying for users and expensive for the company.

Plan your strategy

Choosing which programmes (and related data) can be moved individually is a crucial part of a cloud migration strategy, because doing so minimises interruption for users and the business.

It is also essential to identify workload dependencies and prepare to migrate those dependencies first. Otherwise, the workload may not function properly after migration, or may not function at all, causing unanticipated disruption and downtime.

For instance, if a workload depends on access to a database, it is impractical to move the workload to the cloud and expect it to continue using the internal database. Instead, carry out a database replication or migration before moving the workload. A workload probably has many other dependencies to consider as well, such as backup and disaster recovery mechanisms and application performance monitoring.
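Ordering migrations so dependencies move first is essentially a topological-sort problem. Here is a sketch using Python's standard library; the service names and dependency map are hypothetical.

```python
# Order workload migrations so each dependency moves before its dependents.
from graphlib import TopologicalSorter

# workload -> set of workloads it depends on (which must migrate first)
dependencies = {
    "web-frontend": {"orders-api", "apm-agent"},
    "orders-api":   {"orders-db"},
    "orders-db":    set(),
    "apm-agent":    set(),
}

# static_order() emits dependency-free nodes first, so following this list
# guarantees nothing is migrated before the things it relies on.
migration_order = list(TopologicalSorter(dependencies).static_order())
print(migration_order)
```

Real dependency graphs would come from discovery tooling rather than a hand-written dict, but the ordering logic is the same.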

Safeguard data assets

Planning for data protection should be part of any organization’s cloud migration strategy. Thanks to robust data protection, a workload can continue to run even if the primary data set is compromised.

Before starting the transfer procedure, make a backup of the current local data set. In the event that issues with the migration process compromise data integrity or continuity, this provides a second working copy of the data. The protected copy of the data set can simply be restored, and the associated application can still run locally if necessary. This reduces downtime while you investigate and fix any migration issues.
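One simple way to confirm that a migrated copy still matches the pre-migration backup is to compare content checksums. A minimal sketch, with in-memory bytes standing in for real files:

```python
# Verify that two copies of a data set are identical by comparing SHA-256
# digests. In practice you would stream file contents in chunks.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"customer records v1"   # stand-in for the local backup
migrated = b"customer records v1"   # stand-in for the cloud copy

assert sha256_of(original) == sha256_of(migrated), "copies diverged"
print("integrity check passed")
```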

Load huge data files in advance

Whatever approach you use to move workloads and data to the public cloud, execution time can significantly raise the risk of disruption. Even with aggressive migration strategies like master-replica or multi-master migrations, the potential need to quiesce, or pause, an application or data set to ensure copy continuity may result in downtime. That might not be acceptable for multi-terabyte or petabyte data bundles delivered over a common public internet connection.

Think about application monitoring.

It makes sense for your cloud migration planning to centre on moving workloads from on-premises to the cloud, but this is insufficient. To guarantee and sustain user needs and corporate SLAs, the workload must continue to operate within allowable performance boundaries.

Cloud workloads can use application performance monitoring (APM) technologies like New Relic APM, AppDynamics SaaS, Datadog APM, and others to gather, track, and report on important application performance indicators. When performance difficulties arise, application stakeholders may rapidly identify the problems and begin efficient troubleshooting to fix any issues. APM data can indicate that a migration is successful.

Fix IP Address Problems in Cloud Migration
Mon, 10 Apr 2023

Many businesses fail to consider potential IP address problems when moving to the cloud. Start by learning how cloud service providers handle address assignment.

In the process of moving workloads from a local data centre to the public cloud, businesses frequently ignore IP address problems.

The IP address ranges and availability are normally constrained by your cloud provider and the usage of your cloud instance. Thus, the address your workload uses in the public cloud typically differs from the address it uses on premises and is frequently dynamic.

For instance, when migrating an internal service, such as a web server, to a public cloud like Amazon Web Services, it would be ideal for the public cloud instance to adopt the in-house service’s IP address. But this simply doesn’t happen. Instead, public cloud providers offer a limited pool of internal and external IP addresses, including virtual IP addresses for services and instances, which are assigned to workloads. Users cannot easily make an on-premises IP address route correctly to a cloud provider.

The fact that IP addresses are often assigned dynamically compounds the problem. When a workload restarts after an instance has stopped, its cloud IP address may have changed. The difficulty increases when other workloads require access to the recently relocated workload. For instance, if the relocated workload is a database or another back-end system that other workloads depend on, those other services can no longer reach it because they don’t know its new IP address.

There are, however, a few methods that help ease this transition and avoid major IP address issues.

First, users in public clouds typically have some control over the IP address assigned to their instances. Cloud providers frequently offer static IP assignments for customer-facing services, such as a web server, to guarantee that the address won’t change when the instance reboots. Even though the final address is unlikely to match your initial on-premises IP address, you can choose an IP address and rest assured that it won’t change for the life of the instance.
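A common complement to static addresses is to have dependents resolve a service by a stable name (DNS or a service registry) rather than caching its IP. This toy registry stands in for real service discovery; the names and addresses are hypothetical.

```python
# Dependents look the workload up by name on every call, so a dynamic-address
# change after an instance restart doesn't break them.

registry = {"orders-db": "10.0.5.12"}   # current address, assigned by the cloud

def resolve(service: str) -> str:
    """Return the service's current address from the registry."""
    return registry[service]

# The database instance restarts and comes back with a new dynamic address...
registry["orders-db"] = "10.0.9.77"

# ...but callers that resolve by name still reach it.
print(resolve("orders-db"))  # 10.0.9.77
```

Real deployments get the same effect from DNS records with short TTLs or a discovery service, rather than a shared dict.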


5 Tips for Cloud Application Re-Architecture
Mon, 10 Apr 2023

When you simply relocate a programme as is, it’s difficult to fully realise the advantages of cloud computing. Consider redesigning your app instead, beginning with these five steps.

Here are five crucial considerations for people who need to restructure current apps for the cloud.

1. Equalize the workload and application components

It’s normally desirable to divide apps into multiple components to optimise them for the cloud, but doing so can drastically raise costs, because each component must be hosted individually. Moreover, network connections are required to link application components into workflows, which adds further cost. Review the advantages before you begin and decide whether the expense is justified.

If the expense is justified, determine whether specific application features are always used and scaled together.

2. Cloud resilience and scalability aren’t automatic

Popular cloud features include the capacity to burst workloads from the data centre to the cloud, spin up fresh application copies to boost performance, and replace broken components. Yet without proper application design, you can’t achieve them. The majority of apps are stateful, which means they respond to requests that include a series of actions. A new application or component copy that is spun up might not be aware of its place in the sequence, leading to errors or failures.

State control is one of the most challenging aspects of re-architecting apps for the cloud. Make sure you manage multistage transactions according to their point of origin, the user interface, which is the one element you fully control.
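The state-control idea above can be sketched as follows: if multistage transaction state lives in a shared store keyed by session rather than in any one copy's memory, a freshly spun-up copy can pick up mid-sequence. The in-memory dict stands in for a real shared store such as a database or cache, and all names are hypothetical.

```python
# Externalized session state: any replica can continue a multistage
# transaction because the sequence lives in the shared store, not the process.

shared_state: dict[str, dict] = {}   # stand-in for Redis, a database, etc.

def handle_step(session_id: str, step: str, replica: str) -> dict:
    """Record one step of a transaction, whichever replica serves it."""
    session = shared_state.setdefault(session_id, {"steps": []})
    session["steps"].append(step)
    session["last_replica"] = replica
    return session

handle_step("sess-1", "add-to-cart", replica="copy-A")
result = handle_step("sess-1", "checkout", replica="copy-B")  # a new copy
print(result["steps"])  # ['add-to-cart', 'checkout']
```

The design point: because neither replica holds the sequence privately, scaling out or replacing a failed copy doesn't lose a transaction's place.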

3. Be ready for multi- and hybrid clouds

Most cloud apps require a connection to an enterprise data centre to update mission-critical databases that cannot be moved to the cloud because of corporate compliance and governance requirements. To lessen the risk of downtime, many public cloud users opt for a multi-cloud strategy. Design your applications for one of these hybrid or multi-cloud architectures, or risk problems with cost, performance, and reliability.

Because practically all cloud providers charge for traffic entering and leaving their cloud, hybrid or multi-cloud apps add workflow and scalability complications. This means you’ll probably pay additional fees if you need to scale or replace application components across cloud provider boundaries.
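A back-of-the-envelope estimate makes the egress-fee point concrete. The per-GB rate below is a placeholder assumption, not any provider's actual price.

```python
# Illustrative egress-cost arithmetic for a component that ships data between
# clouds. The rate is an assumed figure for demonstration only.

EGRESS_RATE_PER_GB = 0.09   # assumed USD per GB of traffic leaving a cloud

def monthly_egress_cost(gb_per_day: float, days: int = 30) -> float:
    """Estimated monthly cost for a steady daily egress volume."""
    return gb_per_day * days * EGRESS_RATE_PER_GB

# A component exchanging 50 GB/day across a provider boundary:
print(round(monthly_egress_cost(50), 2))  # 135.0
```

Even a modest daily volume compounds into a recurring line item, which is why component placement across clouds deserves review before re-architecting.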

4. Utilize the web services offered by cloud providers carefully

Several useful web services are available from cloud providers, some of which can drastically simplify and reconfigure applications that migrate to the cloud. On the other hand, these services might potentially push up the price of applications and even introduce unintended expenses for networking and workflow mobility.

Classify your application when evaluating these online services. In general, cloud applications are either front-end programmes that work with a range of user devices or event-driven programmes with a machine-to-machine information source. Both of these models are supported by services from cloud providers, but these services are comparable to middleware tools for bespoke development. Most of the time, the upfront cost of those middleware solutions will be less than the continuing fees for the web services.

5. Strive for uniformity in the development platform

The OS or middleware version you choose will probably have an impact on how your apps behave, and it will undoubtedly have an impact on how you design new applications for the cloud. It may be challenging to maintain your platform tools at version levels that are compatible with your application components’ middleware, which could lead to application failures.

Pros and Cons of Using GPU Instances in Cloud Computing
Mon, 10 Apr 2023

Although GPU instances can give compute-intensive apps the processing boost they require, not all workloads, or budgets, will profit from them.

Businesses are searching for new cloud instance types as AI, machine learning, and big data analytics become more widely used.

Enterprises require greater processing power to process their expanding data sets. That problem can be resolved with the aid of a graphics processing unit (GPU), which is often used for graphics-intensive applications but is also appropriate for some compute-intensive ones.

What types of tasks benefit from GPU instances?

Not all applications or workloads are appropriate for GPU instances. Consider your application’s requirements carefully before you get started, and keep in mind that these instance types are often suitable for workloads that require a lot of computation.

Due to the parallel computing capabilities of the processors, some business analytics applications are ideally suited for GPUs, and AI applications can also be a good fit. Other compute-intensive applications, such as those used for video creation, virtual desktop infrastructure, and engineering simulation, can profit from these instance types as well. Businesses that use supercomputing for academic research can also benefit from GPUs.

What difficulties or risks do GPU instances present?

Although deploying the technology internally might be less expensive, GPU-based cloud instances are often more expensive than their virtual CPU-based equivalents. For some IT teams, there may be a learning curve associated with the kinds of workloads that run on GPUs.

The prices for the GPU alternatives from Google, Azure, and Amazon Web Services (AWS) range from $0.70 to $0.90 per GPU per hour. In contrast, the hourly price for general-purpose, virtual CPU-based instances on AWS is $0.0058.
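Extending those hourly rates over a month of continuous use shows how quickly the gap compounds. The figures below echo the rates quoted above; the 730-hour month is a common billing convention.

```python
# Monthly cost comparison at the hourly rates quoted in the text:
# $0.70-$0.90 per GPU-hour vs. $0.0058 per general-purpose vCPU-hour.

HOURS_PER_MONTH = 730   # common cloud-billing approximation

gpu_monthly = 0.90 * HOURS_PER_MONTH      # high end of the quoted GPU range
cpu_monthly = 0.0058 * HOURS_PER_MONTH    # quoted general-purpose vCPU rate

print(f"GPU:  ${gpu_monthly:.2f}/month")   # GPU:  $657.00/month
print(f"vCPU: ${cpu_monthly:.2f}/month")   # vCPU: $4.23/month
```

This is why GPU instances tend to be reserved for workloads that genuinely exploit parallel computation rather than run continuously by default.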

Moreover, high-performance computing (HPC) programmes, which often use GPUs, frequently demand specialist programming as well as familiarity with particular tools and frameworks like Apache Spark, TensorFlow, and Torch. Businesses might need to spend money on training.

What cloud service providers supply GPU instances?

The top cloud infrastructure vendors each have their own line of GPU instances. Elastic GPUs from AWS can be network-attached to Amazon Elastic Compute Cloud (EC2) instances. The user decides which EC2 instance type best suits the compute, memory, and storage needs of their application, then chooses the GPU to attach. AWS offers four choices with 1,024 to 8,192 MB of GPU memory.

The N-series VMs of Azure include GPU instances, which are available in two flavours: NC and NV. The Tesla K80 card from Nvidia powers the NC sizes, which are designed for network- and compute-intensive workloads. NV sizes use Nvidia’s Tesla M60 GPU card and Nvidia Grid and are designed for visualisation, gaming, and transcoding.

Cloud Users Want Better Container Capabilities
Mon, 10 Apr 2023

Technology is constantly evolving, but it must move in the direction users desire. Containers are developing, yet many IT professionals still demand more from the technology.

Despite all the attention that containers in the cloud have received from the IT community, they nevertheless resemble Beethoven’s Ninth Symphony if he had crowdsourced it: they are rather disconnected, lack key motifs, and are undoubtedly not ready for the Berlin Opera House.

Although Kubernetes and Docker seem to have prevailed in the competition for supremacy in container technology, there is still a tonne of opportunity for development. For instance, Kubernetes deployment is still difficult, and VM-based instances and containers might communicate better.

Deployment at the touch of a button

Even though Kubernetes makes Docker container orchestration simpler, it is a complicated technology that still takes time to set up. Kubernetes uses YAML, a format intended to be user-friendly, for configuration. Yet Kubernetes requires certain complex YAML constructs, such as text blobs.
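One way teams sidestep hand-maintained YAML is to build the manifest as an ordinary data structure and serialize it; Kubernetes accepts JSON manifests as well as YAML. The Deployment below is a minimal illustrative example, not a production configuration.

```python
# Build a Kubernetes Deployment manifest as a plain dict and emit JSON,
# avoiding hand-edited YAML. Names and image are illustrative.
import json

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "web"}},
        "template": {
            "metadata": {"labels": {"app": "web"}},
            "spec": {"containers": [{"name": "web", "image": "nginx:1.25"}]},
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Templating and client libraries take this further, but even this small step removes a class of indentation and quoting errors that hand-written YAML invites.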

IT vendors have made various efforts to resolve the difficulties of container deployment. Several major cloud service providers, such as AWS, Microsoft, and Google, offer managed Kubernetes services that ease the installation burden for IT teams. For its part, Red Hat purchased CoreOS and is likely to merge OpenShift with Tectonic, the CoreOS managed Kubernetes distribution.

Improved computing

Cloud containers could also gain from more efficient computing practices. Consider, for example, software-defined storage with container support. Techniques like compression and erasure coding work well within a single server, but to retain efficiency and performance you should avoid replicating data across interfaces, including the interface between containers themselves. One way to speed up operations is to affinitize services to the same server and give them some means of sharing memory, since exchanging memory pointers is far faster than using memcopy to move a data block. This would also avoid LAN-based copies, which add latency.
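The pointer-versus-copy point can be demonstrated with OS shared memory: two handles attach to the same block, so the "consumer" reads the "producer's" bytes without the data traversing a socket. This is a toy Python sketch; co-located containers would use a comparable OS-level mechanism.

```python
# Two handles to one shared-memory block exchange data in place, with no
# network hop and no memcopy of the payload between processes.
from multiprocessing import shared_memory

block = shared_memory.SharedMemory(create=True, size=16)
try:
    block.buf[:5] = b"hello"                            # "producer" writes in place

    # "consumer" attaches to the same block by name; the bytes are not copied
    view = shared_memory.SharedMemory(name=block.name)
    received = bytes(view.buf[:5])
    view.close()
finally:
    block.close()
    block.unlink()   # free the OS resource

print(received)  # b'hello'
```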

Interfaces for storage

Although still under development, VM-level storage support for containers in the cloud will become a significant problem as nonvolatile dual in-line memory modules gain popularity and IT organisations strive to share dynamic RAM among clusters.

Although we have made progress in this area, storage providers and container software still have work to do. The overall goal is to enable storage access for VMs and containers across various cloud platforms.

Low-Code Platforms Native to the Cloud Compete with Third-Party Options
Mon, 10 Apr 2023

Application development teams must weigh the advantages of consolidation against the risk of lock-in when deciding between native and third-party low-code technologies.

Some organisations find it difficult to decide whether to continue with a third-party vendor or use a native service from their cloud provider as the usage of no-code and low-code platforms increases.

Cloud-native resources

Consolidating cloud services and infrastructure may make management and monitoring easier. It may be possible to advance that plan and reduce complexity and cost by using a low-code tool that is a component of a larger cloud platform you already use.

The following are some popular public cloud providers’ no-code or low-code platforms:

Mendix on the IBM Cloud, App Creator on the Google Cloud Platform, and PowerApps on the Microsoft Cloud.

AWS also intends to create a low-code platform for its cloud.

Lock-in dangers

These native technologies do, however, carry notable risk, particularly vendor lock-in, and the degree of risk differs between suppliers. Low-code systems like Mendix and Microsoft PowerApps are relatively agnostic. PowerApps, for instance, offers connectors that allow apps to connect to third-party infrastructure, and users can build their own custom connectors for resources that are not officially supported. Although integrated with IBM Cloud, Mendix still adheres to its independent, pure-play heritage.

User knowledge

In general, the technological skill required for application development with low-code solutions from cloud providers is higher. For instance, even if users of low-code platforms don’t need to be seasoned programmers, they do need to comprehend the subtleties of the APIs and databases of their cloud provider.

In third-party low-code platforms, by contrast, databases and other extra resources are created especially for use with low-code applications. This makes them more user-friendly and distinguishes them from public cloud services, which are created with a range of uses in mind and are often managed by qualified IT personnel.
