The ultimate AI glossary: Artificial intelligence definitions to know

Active learning: active learning is an AI approach that efficiently combines aspects of supervised and unsupervised learning. The AI model identifies patterns, determines what to learn next, and seeks human intervention only when necessary. This results in a quicker and more precise specialized AI model, which is ideal for businesses intending to adopt reliable and efficient AI.
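
To make the idea concrete, here is a minimal sketch of the "ask a human only when uncertain" pattern behind active learning, using a scikit-learn classifier; the data, model choice, and number of examples sent for labeling are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative data: a small labeled seed set and a larger unlabeled pool.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(20, 4))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(200, 4))

# Train on what we have, then score the unlabeled pool.
model = LogisticRegression().fit(X_labeled, y_labeled)
probs = model.predict_proba(X_pool)

# Uncertainty sampling: the examples the model is least sure about
# (probability closest to 0.5) are routed to a human for labeling.
uncertainty = 1 - probs.max(axis=1)
ask_human = np.argsort(uncertainty)[-10:]   # the 10 most ambiguous examples
print("Send these pool indices to a human annotator:", ask_human)
```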

AI alignment: AI alignment is a subfield of AI research and training focused on aligning AI system objectives with those of its designers and/or users. This can involve both ensuring the AI achieves the desired goals and enabling AI systems to incorporate the values and ethical standards of their creators and/or users when making decisions.

AI hallucinations: AI hallucinations are incorrect or misleading outputs generated by AI systems. These errors are caused by a variety of factors, including insufficient or biased training data or incorrect assumptions made by the model.

AI-powered automation: AI-powered automation, or "intelligent automation," refers to the augmentation of rules-based automation technologies, such as robotic process automation (RPA), with AI capabilities such as machine learning (ML), natural language processing (NLP), and computer vision (CV). The goal is to emulate a wide range of the decision making and problem solving that people do, and thus expand the range of work that can be automated. Companies that strategically combine automation and AI across their business processes enhance employee productivity, improve customer experience, and drive rapid, agile digital transformation.

AI usage auditing: an AI usage audit is a comprehensive review of your AI program to ensure it's meeting set goals, adhering to the standards you've set, and complying with all legal requirements. Just as a regular health check-up ensures your well-being, an AI usage audit is fundamental to confirming that the system is performing accurately and ethically.

Artificial general intelligence (AGI): artificial general intelligence is a theoretical AI system with the same intellectual capacity and adaptability as a human. It refers to an AI system that could essentially match human skills and capabilities. This is largely considered a future concept, with many experts predicting we are decades or even centuries away from achieving true AGI.

Artificial intelligence (AI): artificial intelligence refers to computer systems capable of performing complex tasks that only humans could do historically, such as reasoning, decision making, and problem-solving.

Bias: bias is a phenomenon that skews the results of AI-driven decisions in a way that disadvantages an idea, objective, or group of people. This error typically occurs in AI models due to insufficient or unrepresentative training data.

Confidence score: an AI confidence score is a probability score indicating the AI model's level of certainty that it has performed its assigned task correctly.
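
As a minimal sketch, a confidence score is often used as a gate on downstream automation; the prediction dictionary and the 0.90 threshold below are illustrative assumptions, not a product default.

```python
# Hypothetical model output: a predicted label plus the model's confidence (0.0-1.0).
prediction = {"label": "invoice", "confidence": 0.87}

CONFIDENCE_THRESHOLD = 0.90  # illustrative cutoff chosen by the business

if prediction["confidence"] >= CONFIDENCE_THRESHOLD:
    print(f"Auto-process as '{prediction['label']}'")
else:
    # Low confidence: route to a human reviewer (human in the loop).
    print("Confidence too low; queue for human review")
```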

Conversational AI: conversational AI is a type of AI system that simulates human conversation, leveraging various AI techniques such as natural language processing and generative AI. It can also be enhanced with image recognition capabilities.

Cost control: cost control is a process that allows you to monitor your project's progress in real time. By tracking resource utilization, analyzing performance metrics, and identifying potential budget overruns before they escalate, you can take action to keep your project on track and within budget.

Data annotation (or data labeling): data annotation, also known as data labeling, is the process of marking a dataset with the specific features you want an AI model to learn and recognize.

Deep learning: deep learning is a subset of machine learning that uses multi-layered neural networks (also known as "deep neural networks") to simulate humans' complex decision making processes.
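
The sketch below shows what "multi-layered" means in practice: a tiny untrained network whose layers feed into one another, written in NumPy purely for illustration (the weights are random, not learned).

```python
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0, x)

# A tiny multi-layer ("deep") network: input -> hidden layer -> hidden layer -> output.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 3)), np.zeros(3)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first layer picks up simple features
    h2 = relu(h1 @ W2 + b2)  # deeper layers combine them into abstractions
    return h2 @ W3 + b3      # output layer, e.g. scores for 3 classes

x = rng.normal(size=(1, 8))  # one example with 8 input features
print(forward(x))
```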

Enterprise AI: enterprise AI is the combination of artificial intelligence—the ability for a machine to learn, understand, and interact in a very human way—with software designed to meet organizational needs. Enterprise AI has to respect strict enterprise governance, compliance, and security rules. The UiPath AI Trust Layer, along with human in the loop and rule-based workflow capabilities available via the UiPath Business Automation Platform™, enables enterprise AI.

Foundational models: foundational models are AI models that learn from a vast range of data and can be fine-tuned for specific tasks, making them highly versatile. This adaptability reduces the need to build separate models for each task, making them a cost-effective option. Various techniques, like retrieval augmented generation (RAG) and more advanced methods, are employed to bridge the gap between a foundational model's general knowledge and the precision required by specialized AI models.

Generative AI: generative AI is a type of artificial intelligence that can create new content, including text, images, audio, and synthetic data. Synthetic data is artificially created to resemble real data but doesn't copy actual real-world details. This technology learns from large amounts of existing data and is designed to generate new, unique content that resembles the original data but is distinctly different.

Generative AI feature governance: generative AI feature governance refers to the set of principles, policies, and practices that are specifically designed to encourage and ensure the responsible use of generative AI technologies across the entire organization. This ensures their use aligns with both the organization's values and broader societal norms.

Generative annotation: generative annotation leverages the power of generative AI to streamline the labeling or annotation of datasets. While it's commonly used as a form of pre-labeling, a human is still needed to determine what the final annotations should be.

Generative classification: generative classification is the use of AI and natural language queries to classify information formats like documents or communications.

Generative extraction: generative extraction is the use of AI and natural language queries to accurately understand and extract data from a particular type of information, like a document or message.

Generative validation: generative validation is the use of generative AI to review and validate specialized AI model outputs. While it can't replace a human in the loop, it can reduce the workload of human reviewers by automating the review of cases.

Harmful content filtering: harmful content filtering serves as a protective shield within AI systems. It's a method designed to detect and filter out harmful content, focusing on four primary categories: hate speech, sexually explicit material, violence, and content related to self-harm. It grades violations by severity level (safe, low, medium, and high) and plays a vital role in fostering a safer and healthier digital environment.
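
As a rough sketch of the category-and-severity idea, the snippet below grades text against the four categories using placeholder keyword lists and a made-up severity rule; a real filter would rely on trained models rather than keywords.

```python
# Placeholder keyword lists standing in for a real content-moderation model.
CATEGORY_KEYWORDS = {
    "hate": ["slur_a", "slur_b"],
    "sexual": ["explicit_a"],
    "violence": ["attack", "kill"],
    "self_harm": ["hurt myself"],
}
SEVERITY_LEVELS = ["safe", "low", "medium", "high"]

def grade(text: str) -> dict:
    """Return a severity level per category based on naive keyword counts."""
    text = text.lower()
    result = {}
    for category, keywords in CATEGORY_KEYWORDS.items():
        hits = sum(text.count(k) for k in keywords)
        # 0 hits -> safe, 1 -> low, 2 -> medium, 3+ -> high (illustrative rule).
        result[category] = SEVERITY_LEVELS[min(hits, 3)]
    return result

print(grade("A harmless sentence."))
```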

Human in the loop (HITL): human in the loop is a feedback process where a human (or team of humans) provides a critical review of the output of an AI model. This collaboration is essential both for improving AI model training and acting as a safeguard to verify AI decisions before they impact real-world outcomes.

Intelligent document processing (IDP): intelligent document processing is a technology that extracts data from diverse document types (including forms, contracts, and communications like emails) to automate and analyze document-based tasks. IDP harnesses multiple types of AI, including natural language processing and computer vision, to extract data from structured, semi-structured, and unstructured content.

Large language model (LLM): a large language model is a type of AI technology that can understand and create text-based content. It's trained on vast amounts of data (which is why it's called "large") and built on machine learning principles. Employing a specialized kind of neural network known as a transformer model, LLMs are a key component of modern AI technologies, making remarkable contributions to language understanding and generation.

LLM gateway: the large language model gateway serves as a vital bridge between the user and the LLM service. Along with directing requests to the service and managing responses, it boosts the usefulness and efficiency of LLM exchanges by carrying out crucial post-processing tasks. This gateway guarantees that the LLM aligns with AI application best practices by making sure that it's used effectively, safely, and responsibly.

Machine learning (ML): machine learning is a branch of AI that uses data and algorithms to gradually improve the accuracy of an AI model by mimicking the way humans learn.

Model accuracy: model accuracy measures how often an AI model performs a task correctly. More technical evaluations often include the "F1 score," a metric that combines precision (the ability to avoid false positives) and recall (the ability to find all relevant instances).

Natural language processing (NLP): natural language processing is an AI technique that blends linguistic, statistical, and AI models to enable machines to recognize, understand, and generate text and/or speech.

PII and sensitive data masking: data masking of personally identifiable information (PII) is a critical security measure in the enterprise. It's a process that detects and conceals sensitive data falling under standard PII categories, such as Social Security numbers, email addresses, and credit card numbers. This safeguard preserves the privacy and confidentiality of users' data by ensuring that machine learning processes don't inadvertently reveal or share any sensitive information.
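
A minimal sketch of pattern-based masking is shown below; the regular expressions cover only a few common, US-centric formats, and production systems typically combine such patterns with ML-based entity detection.

```python
import re

# Illustrative patterns for a few common PII formats (not exhaustive).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched PII with a category placeholder before the text reaches a model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(mask_pii("Contact jane.doe@example.com, SSN 123-45-6789."))
```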

Precision: precision refers to the accuracy of predictions made by an AI model. In other words, it's the percentage of model predictions that are correct. Generally, the higher the precision, the more relevant the model's results are.

Prompt: prompts are the inputs, queries, or requests that a user or program gives to an AI large language model to obtain a desired output. Prompts can be any combination of text and/or code, and often take the form of conversational questions or code snippets.
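
The sketch below focuses on the shape of a prompt; `call_llm` is a hypothetical placeholder for whichever LLM client or SDK you actually use.

```python
# `call_llm` is a placeholder for a real LLM client; the point here is the prompt itself.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

prompt = (
    "You are a support assistant for an invoicing product.\n"
    "Summarize the customer email below in two sentences and state its urgency "
    "(low, medium, or high).\n\n"
    "Email:\n"
    "\"Our March invoice shows a duplicate line item and payment is due Friday.\"\n"
)

# response = call_llm(prompt)  # uncomment once a real client is plugged in
print(prompt)
```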

Recall: recall measures an AI model's ability to identify all relevant data points. In other words, it's the percentage of true positive predictions compared to the total number of actual positives. Recall is extremely important in scenarios where catching every actual positive case is vital, even at the risk of predicting some false positives. For example, in medical diagnostics, a high recall rate is essential to ensure that all potential diseases are detected, even if it means flagging some false positives that need to be investigated further.
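
Tying the precision, recall, and F1 score entries together, the sketch below computes all three from made-up evaluation counts.

```python
# Made-up counts from evaluating a model on a labeled test set.
true_positives = 80    # relevant items the model correctly flagged
false_positives = 10   # items the model flagged incorrectly
false_negatives = 20   # relevant items the model missed

precision = true_positives / (true_positives + false_positives)   # 0.889
recall = true_positives / (true_positives + false_negatives)      # 0.800
f1 = 2 * precision * recall / (precision + recall)                 # 0.842

print(f"precision={precision:.3f}, recall={recall:.3f}, F1={f1:.3f}")
```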

Responsible AI: responsible AI is the practice of designing, developing, and deploying AI with good intentions to empower employees and businesses while fairly impacting customers and society. This approach builds trust and allows companies to scale their AI initiatives with confidence. By prioritizing responsible AI practices, we can ensure that this powerful technology is a force for good in the world.

Retrieval augmented generation (RAG): retrieval augmented generation is a technique for enhancing the accuracy and reliability of generative AI by leveraging data, or "context," fetched from external sources.
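
Here is a minimal sketch of the RAG pattern: retrieve the most relevant stored context, then prepend it to the prompt. The `embed` function is a stand-in for a real embedding model (it returns fake vectors), and the documents are illustrative.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model; returns a deterministic fake vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=16)

documents = [
    "Invoices are due 30 days after issue.",
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
]
doc_vectors = np.array([embed(d) for d in documents])

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k stored documents most similar to the question (cosine similarity)."""
    q = embed(question)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "When are invoices due?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# The assembled prompt would then be sent to a generative model.
print(prompt)
```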

Semi-supervised learning: semi-supervised learning is a subset of machine learning. It combines supervised and unsupervised learning techniques by using both labeled and unlabeled data to train AI models.

Specialized AI: specialized AI refers to artificial intelligence systems designed to perform specific tasks or solve particular problems within a narrow domain or scope. Think of it like your own personal expert that's designed to do one job extremely well. Because it's programmed to focus on a specific task or address a particular problem, it can go deep into the subject and provide high levels of accuracy and efficiency. It's typically less expensive to run (because of its smaller footprint) and produces more accurate outputs, but does require training to maximize accuracy.

Supervised learning: supervised learning is a subset of machine learning. It's characterized by the use of labeled datasets to train AI models to accurately predict outcomes.
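
A minimal sketch of supervised learning with scikit-learn is shown below: labeled examples go in, a model that predicts labels for new examples comes out; the toy data is invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Labeled training data: each row is [hours_studied, hours_slept],
# and each label says whether the student passed (1) or failed (0).
X_train = [[8, 7], [1, 4], [6, 8], [2, 5], [7, 6], [0, 6]]
y_train = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier().fit(X_train, y_train)

# The trained model predicts outcomes for new, unseen examples.
print(model.predict([[5, 7], [1, 3]]))
```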

Taxonomy: in AI model training, a taxonomy is a classification system that organizes labels or tags used for data annotation. These labels are used to train the model to understand and learn various patterns, trends, and outcomes. The taxonomy structures different classes of information by putting them in a hierarchy (for example: a customer complaint is a type of email, and an email is a type of communication), which provides clarity and accuracy during the training process.
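
The sketch below represents the hierarchy from the example above as a nested structure that annotation labels can be resolved against; the sibling classes are illustrative additions.

```python
# The hierarchy from the example above: a customer complaint is a type of email,
# and an email is a type of communication.
TAXONOMY = {
    "communication": {
        "email": {
            "customer complaint": {},
            "purchase order": {},   # illustrative sibling class
        },
        "chat message": {},         # illustrative sibling class
    }
}

def path_to(label, tree, trail=()):
    """Return the full hierarchy path for a label, walking the nested structure."""
    for node, children in tree.items():
        if node == label:
            return trail + (node,)
        found = path_to(label, children, trail + (node,))
        if found:
            return found
    return None

print(" > ".join(path_to("customer complaint", TAXONOMY)))
```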

Transformer: a transformer model is a type of AI technology that learns meaning by tracking relationships in sequential data, such as words in a sentence or numbers in a sequence. These models apply mathematical techniques called 'attention' or 'self-attention' to detect subtle trends and relationships between data points.
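
For the mathematically curious, here is a minimal NumPy sketch of the scaled dot-product self-attention calculation at the heart of a transformer, using tiny random matrices in place of learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # e.g. 4 tokens, 8-dimensional embeddings

X = rng.normal(size=(seq_len, d_model))    # token embeddings
W_q = rng.normal(size=(d_model, d_model))
W_k = rng.normal(size=(d_model, d_model))
W_v = rng.normal(size=(d_model, d_model))

Q, K, V = X @ W_q, X @ W_k, X @ W_v
scores = Q @ K.T / np.sqrt(d_model)        # how strongly each token attends to the others
weights = softmax(scores, axis=-1)         # each row sums to 1
output = weights @ V                       # each token becomes a weighted mix of all tokens

print(weights.round(2))
```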

Unsupervised learning: unsupervised learning is a subset of machine learning. It uses machine learning algorithms to analyze and cluster unlabeled datasets without human input.
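
A minimal sketch of unsupervised learning is shown below: k-means clustering groups unlabeled points into clusters on its own; the data and cluster count are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled data: two blobs of points with no labels attached.
rng = np.random.default_rng(1)
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
])

# The algorithm groups similar points on its own, without human-provided labels.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10], clusters[-10:])
```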

Vector database: a vector database is a collection of data stored as vectors. Vector databases make it easier for AI models to remember previous inputs and user prompts, which supports AI use cases such as search, recommendations, and text generation.

Vector: a vector is a series of numbers created by an AI model to represent words, images, videos, or audio. Vectors are crucial for helping AI models understand meaning and context.
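
Tying the vector and vector database entries together, the sketch below compares hand-made vectors with cosine similarity, which is how a vector database decides which stored items are closest in meaning; the numbers are invented for illustration.

```python
import numpy as np

# Illustrative vectors: in practice these come from an embedding model.
vec_king  = np.array([0.80, 0.65, 0.10])
vec_queen = np.array([0.78, 0.70, 0.12])
vec_apple = np.array([0.10, 0.20, 0.90])

def cosine_similarity(a, b):
    """Close to 1.0 means the vectors point the same way (similar meaning)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A vector database answers "which stored vector is closest to this one?"
print(cosine_similarity(vec_king, vec_queen))  # high: related concepts
print(cosine_similarity(vec_king, vec_apple))  # lower: unrelated concepts
```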
