Deploying Generative AI in an Enterprise Setting: Opportunities and Challenges

By Ross Farrelly, PhD


The rise of generative AI

Over the last decade, the power of AI has grown exponentially. Driven by the ubiquity of detailed, fine-grained data, sophisticated algorithms and access to relatively cheap compute power via the cloud, the scope and power of AI to solve real-world problems has never been greater.

Language translation, image and text classification, and speech-to-text conversion are some of the common, narrow problem spaces that have been fully or partially solved by predictive or prescriptive AI. Predictive AI helps us understand what the future will look like. Prescriptive AI helps us deploy limited resources within certain constraints.

“AI is no longer an afterthought. Forward thinkers are reimagining critical workflows with AI at their core.”

Both methods have helped companies such as KPMG, PwC and Slack become more efficient and have enabled knowledge workers to create new content quickly and make informed decisions to meet business goals. We are now seeing the emergence of broad AI capabilities, built on top of these narrow AI solutions, to address much more complex and high-value tasks.

AI is no longer an afterthought. Forward thinkers are reimagining critical workflows with AI at their core. A survey found that 50% of CEOs are already integrating generative AI into digital products and services. Project Debater is an early example of generative AI that emerged from IBM’s earlier forays with Deep Blue and Jeopardy. IBM Research debuted Project Debater in 2018, and it became the first computer system capable of debating humans on complex topics, building persuasive arguments and making well-informed decisions.

Creating more intelligent operations with AI and automation is increasingly urgent, with over 85 million jobs expected to go unfilled by 2030. Generative AI presents a compelling opportunity to augment employee efforts and make the enterprise more productive. The combination of neural network architectures with machine learning techniques has enabled the emergence of large foundation models that outperform existing benchmarks and are capable of handling multiple data modalities. However, the same survey also found that 57% of CEOs are concerned about data security, and 48% worry about bias and data accuracy.

From traditional AI to generative AI

One of the major barriers to developing high-quality predictive models is the acquisition of a verified training set. This training set needs to contain the “ground truth” with which the model can be trained. The unique insight which has fuelled the development of generative AI is that any sentence, line of code or, in fact, any data in which the order of the information is meaningful can be used as a training set for a model.

All that needs to be done is to split the sequence of data into a training portion and a test portion and train the model to predict the missing part. What this means is that suddenly vast swathes of data on the internet and on sites such as Wikipedia can be used to train generative AI models.
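As a rough illustration of that insight (a minimal, hypothetical sketch in plain Python, not the training code of any real model), the snippet below turns an ordinary sentence into the kind of (context, next-word) pairs a generative model is trained to predict:

```python
# Minimal, illustrative sketch: any ordered data can supervise itself.
# Each position in a sequence yields a (context, next-token) training pair;
# the "missing part" the model must predict is simply the next token.

def make_training_pairs(text, context_size=4):
    """Split a token sequence into (context, target) pairs for next-token prediction."""
    tokens = text.split()  # naive whitespace tokenisation, purely for illustration
    pairs = []
    for i in range(context_size, len(tokens)):
        context = tokens[i - context_size:i]  # the part of the sequence the model sees
        target = tokens[i]                    # the held-out part it must predict
        pairs.append((context, target))
    return pairs

if __name__ == "__main__":
    sample = "generative models learn to predict the next word in a sequence of text"
    for context, target in make_training_pairs(sample):
        print(" ".join(context), "->", target)
```

The same recipe applies to code, chemical structures or any other ordered data, which is why so much publicly available text can serve as training material.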

Nevertheless, training a large language model is not trivial. Vast amounts of data need to be collected, cleansed, pre-processed and tokenised. The actual training, testing and fine-tuning of the model requires access to extremely high-performance compute power which is beyond the reach of most organisations.

However, once trained, these models can be extremely useful and are able to complete a wide range of tasks. Generative AI models have been used to compose sentences and paragraphs, write computer programs, create images, and design new pharmaceuticals. Early instances can be seen at Mitsui Chemicals, Bouygues Telecom and Insilico Medicine, where they have helped save time, resources and labour.

Generative AI for the enterprise

“Generative AI offers new opportunities for reducing the burden of digital labour and augmenting certain aspects of content creation.”

Clearly, generative AI offers new opportunities for reducing the burden of digital labour and augmenting certain aspects of content creation. However, using generative AI in a regulated enterprise setting does raise significant challenges, such as:

  • Right access, right data: For generative models to be useful in an enterprise setting, they need to be trained on the proprietary data which resides behind an organisation’s firewall. Consider this scenario: a manager at a large corporation wants to use generative AI to write a performance report for a salesperson for the first half of 2023. A generative AI model trained on publicly available data will not be able to do this. To complete the task, the model would need visibility of confidential data within the organisation, such as achievement data, sales target data, customer NPS data and the record of training courses and badges completed by the employee.
  • Governance: When deploying generative AI within a corporation, there needs to be a governance layer monitoring the use of these technologies to ensure they are being used in accordance with both the company’s policies and any regulatory guidelines which may apply to that industry. The importance of governance increases as governments turn their attention to legislating the use of AI. Very recently, the EU Parliament approved the first draft of the AI Act, which includes references to the use and governance of generative AI. The legislation, while largely welcome, has far-reaching consequences which are yet to be seen.
  • Social impact: The social impact of generative AI on the enterprise workforce is also a concern that needs careful consideration. Will knowledge workers welcome the productivity gains offered by such technology, or will they feel their cognitive contribution to the business is being diminished and that their roles are under threat? Such questions are hard to answer at this stage, as the technology is still in its infancy.

Generative AI for the enterprise – the way forward

AI has been around since the 1950s, and IBM was among the early pioneers in the field. IBM has been working with corporations to address these challenges and recently announced watsonx, an enterprise-ready AI and data platform comprising three powerful components:

  • the watsonx.ai studio for new foundation models, generative AI and machine learning
  • the watsonx.data fit-for-purpose data store, with the flexibility of a data lake and the performance of a data warehouse
  • the watsonx.governance toolkit, to enable AI workflows that are built with responsibility, transparency and explainability

Using the watsonx platform, corporations have three options when it comes to foundation models.

  • They can leverage prebuilt, IBM-curated models which are designed to ensure model trust and efficiency in business applications.
  • They can experiment with multiple open-source models through IBM’s partnership with Hugging Face, to identify the best models for their needs (a brief sketch of this route follows the list).
  • They can use watsonx.ai to train and tune their own models using IBM’s advanced prompt-tuning capabilities and full SDK and API libraries.
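As a hedged illustration of the open-source route only (this sketch calls the public Hugging Face transformers library directly rather than the watsonx.ai SDK, and the model names are arbitrary small examples, not recommendations), a quick side-by-side comparison of candidate models might look like this:

```python
# Illustrative only: sampling a few open-source generative models from the
# Hugging Face Hub to compare their output on the same prompt before
# settling on one. Swap in whichever candidate models you want to evaluate.
from transformers import pipeline

candidate_models = ["gpt2", "distilgpt2"]  # arbitrary examples
prompt = "Summarise the benefits of a governed enterprise AI platform:"

for model_name in candidate_models:
    generator = pipeline("text-generation", model=model_name)
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(f"--- {model_name} ---")
    print(result[0]["generated_text"])
```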

The future forward

The promise of generative AI is front of mind for executives in every industry at the moment. Businesses small, medium and large are looking to take advantage of this exciting new technology and realise the productivity gains it offers. In doing so, they need to be aware of the risks and wider implications of using such technology and ensure it is deployed in a governed, trustworthy and reliable manner which enhances the corporation’s position in the market rather than undermining it.


About the author

Ross Farrelly is the Segment Leader of Data Science and Artificial Intelligence for IBM Asia Pacific. He has extensive consulting experience in large-scale big data implementations in telcos, financial services and government agencies in Australia, New Zealand and South East Asia, where he helped develop and execute strategies to adopt and realise the benefits of predictive analytics and AI. Ross has a Master’s in Applied Statistics, a Master’s in Applied Ethics, a first-class honours degree in Pure Mathematics and a PhD in Information Systems.

IBM watsonx, an enterprise-ready AI and data platform, was rolled out on 11 July. Also available are new GPU offerings on IBM Cloud, an AI-tailored infrastructure designed to support enterprise compute-intensive workloads and address the demand for foundation models. Later this year, IBM is expected to offer full-stack, high-performance, flexible, AI-optimised infrastructure, delivered as a service on IBM Cloud, for both training and serving foundation models.

