Sunday 8 October 2023

Responsible AI

Google has put AI principles in place.

Many of us already have daily interactions with artificial intelligence or AI, from predictions for traffic and weather to recommendations for TV shows you might like to watch next.

As AI becomes more common, many technologies that aren't AI enabled may start to seem inadequate.

Now, AI systems are enabling computers to see, understand, and interact with the world in ways that were unimaginable just a decade ago. These systems are developing at an extraordinary pace. Yet, despite these remarkable advancements, AI is not infallible. Developing responsible AI requires an understanding of the possible issues, limitations, or unintended consequences. Technology is a reflection of what exists in society.

Without good practices, AI may replicate existing issues or biases and amplify them.

But there isn't a universal definition of responsible AI, nor is there a simple checklist or formula that defines how responsible AI practices should be implemented.

Instead, organizations are developing their own AI principles that reflect their mission and values.

While these principles are unique to every organization, if you look for common themes, you find a consistent set of ideas across transparency, fairness, accountability, and privacy.

At Google, the approach to responsible AI is rooted in a commitment to strive toward AI that's built for everyone, that's accountable and safe, that respects privacy, and that is driven by scientific excellence. Google has developed its own AI principles, practices, governance processes, and tools that together embody these values and guide its approach to responsible AI.

Responsible AI doesn't mean focusing only on the obviously controversial use cases. Without responsible AI practices, even seemingly innocuous AI use cases, or those with good intent, could still cause ethical issues or unintended outcomes, or not be as beneficial as they could be. Ethics and responsibility are important not only because they represent the right thing to do, but also because they can guide the design of AI to be more beneficial for people's lives.

In June 2018, Google announced seven AI principles to guide its work:

• One, AI should be socially beneficial.

Any project should take into account a broad range of social and economic factors, and Google will proceed only where it believes that the overall likely benefits substantially exceed the foreseeable risks and downsides.

• Two, AI should avoid creating or reinforcing unfair bias.

Google seeks to avoid unjust effects on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

• Three, AI should be built and tested for safety.

Google will continue to develop and apply strong safety and security practices to avoid unintended results that create risks of harm.

• Four, AI should be accountable to people.

Google will design AI systems that provide appropriate opportunities for feedback,
relevant explanations, and appeal.

• Five, AI should incorporate privacy design principles.

Google will provide opportunities for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data.

• Six, AI should uphold high standards of scientific excellence.

Google will work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches.

Google will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.

• Seven, AI should be made available for uses that accord with these principles.

Many technologies have multiple uses.

Google will work to limit potentially harmful or abusive applications.

That is a brief introduction to Responsible AI.

Friday 6 October 2023

Generative AI

What is Generative AI? 

Generative AI is a type of artificial intelligence that creates new content based on what it has learned from existing content. The process of learning from existing content is called training, and it results in the creation of a statistical model. When given a prompt, generative AI uses the model to predict what an expected response might be, and this generates new content. Essentially, it learns the underlying structure of the data.

Generative AI is a type of artificial intelligence (AI) that can create new content, such as text, images, audio, and video. It does this by learning from existing data and then using that knowledge to generate new and unique outputs.
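The "training produces a statistical model that predicts an expected response" idea can be sketched with a toy bigram model in Python. This is purely illustrative (real generative models use large neural networks, not word counts): training counts which word tends to follow each word, and generation repeatedly predicts the most likely next word from a prompt.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" (an illustrative assumption, not real training data).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Training: count which word follows each word — a toy statistical model.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def generate(prompt, length=4):
    """Greedily predict the most likely next word, one step at a time."""
    words = [prompt]
    for _ in range(length):
        candidates = model[words[-1]].most_common(1)
        if not candidates:
            break  # no learned continuation for this word
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the"))  # → the cat sat on the
```

Given the prompt "the", the model predicts "cat" because "cat" followed "the" most often during training — the same predict-the-next-token principle, at miniature scale.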

Generative AI is used in a variety of applications, including:

1. Artificial art: Generative AI can be used to create new pieces of art, such as paintings, sculptures, and music. 
2. Text generation: Generative AI can be used to generate new text, such as news articles, blog posts, and marketing copy. 
3. Image generation: Generative AI can be used to generate new images, such as product photos, stock photos, and marketing materials. 
4. Video generation: Generative AI can be used to generate new videos, such as product demos, training videos, and marketing videos.
5. Audio generation: Generative AI can be used to generate new audio, such as music, podcasts, and audiobooks.

A foundation model is a large AI model pretrained on a vast quantity of data that was "designed to be adapted" (or fine-tuned) to a wide range of downstream tasks, such as sentiment analysis, image captioning, and object recognition. 


Hallucinations are words or phrases that are generated by the model that are often nonsensical or grammatically incorrect.

These are the factors that can cause hallucinations:

• The model is not given enough context. 
• The model is not trained on enough data. 
• The model is trained on noisy or dirty data. 


Prompt: A prompt is a short piece of text that is given to the large language model as input, and it can be used to control the output of the model in many ways. 

Example of both a generative AI model and a discriminative AI model: 

• A generative AI model could be trained on a dataset of images of cats and then used to generate new images of cats. 

• A discriminative AI model could be trained on a dataset of images of cats and dogs and then used to classify new images as either cats or dogs.
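The distinction can be sketched with a deliberately tiny numeric stand-in for images (the "sizes" below are made-up data for illustration): a generative model learns each class's distribution, so it can sample new examples; a discriminative model only learns the boundary that separates the classes.

```python
import random

# Toy 1-D stand-in for images: "cat" measurements cluster near 30,
# "dog" measurements near 60 (fabricated data for illustration only).
cats = [28, 30, 31, 29, 32]
dogs = [58, 61, 60, 59, 62]

# Generative model: learn the distribution of each class (just the mean here),
# so it can *generate* brand-new samples from that distribution.
cat_mean = sum(cats) / len(cats)
dog_mean = sum(dogs) / len(dogs)
random.seed(0)
new_cat = random.gauss(cat_mean, 1.5)  # a newly generated "cat"

# Discriminative model: learn only the decision boundary between classes.
boundary = (cat_mean + dog_mean) / 2

def classify(x):
    return "cat" if x < boundary else "dog"

print(round(boundary, 1))               # → 45.0
print(classify(40), classify(55))       # → cat dog
print(classify(new_cat))                # → cat
```

The generative side can both classify and produce new data; the discriminative side can only classify, but often does so more accurately because that is all it has to learn.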

SSL

What is SSL and why is it important?

Secure Sockets Layer (SSL) certificates, sometimes called digital certificates, are used to establish an encrypted connection between a browser or user’s computer and a server or website.


SSL: SECURE SOCKETS LAYER

SSL is standard technology for securing an internet connection by encrypting data sent between a website and a browser (or between two servers). It prevents hackers from seeing or stealing any information transferred, including personal or financial data.


HOW DO SSL CERTIFICATES WORK?

SSL certificates establish an encrypted connection between a website/server and a browser with what’s known as an “SSL handshake.” For visitors to your website, the process is invisible — and instantaneous.


Authentication

For every new session a user begins on your website, their browser validates your server's SSL certificate (and, in mutual TLS, the server can also validate the client's certificate).
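The certificate validation described above is what Python's standard `ssl` module enables by default on the client side. The following sketch only builds the context (actually running the handshake needs network access, and "example.com" is a placeholder host):

```python
import socket
import ssl

# Build a client context with secure defaults: it loads the system's trusted
# CA certificates and turns on certificate and hostname verification — the
# "authentication" step of the SSL/TLS handshake.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate validation is on
print(ctx.check_hostname)                    # hostname checking is on

# To perform the actual handshake, wrap a TCP socket (requires network access):
# with socket.create_connection(("example.com", 443)) as tcp:
#     with ctx.wrap_socket(tcp, server_hostname="example.com") as tls:
#         print(tls.version())                  # e.g. "TLSv1.3"
#         print(tls.getpeercert()["subject"])   # the validated certificate
```

If the server's certificate is untrusted or doesn't match the hostname, `wrap_socket` raises an `ssl.SSLError` instead of completing the handshake, which is exactly the protection the validation step provides.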



Encryption

Your server shares its public key with the browser, which the browser then uses to create and encrypt a pre-master key. This is called the key exchange.



Decryption

The server decrypts the pre-master key with its private key, establishing a secure, encrypted connection used for the duration of the session.
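The key exchange, encryption, and decryption steps above can be sketched with textbook RSA on deliberately tiny numbers. This is a toy illustration only: real TLS uses keys thousands of bits long, and modern TLS versions prefer Diffie–Hellman key exchange over RSA encryption of the pre-master secret.

```python
# Server's RSA key pair (classic textbook example with tiny primes — insecure,
# for illustration only).
p, q = 61, 53
n = p * q          # public modulus: 3233
e = 17             # public exponent (shared with the browser in the certificate)
d = 2753           # private exponent (kept secret by the server)

# Key exchange: the browser picks a pre-master secret and encrypts it
# with the server's public key (e, n).
pre_master = 42
ciphertext = pow(pre_master, e, n)

# Decryption: only the server, which holds d, can recover the pre-master secret.
recovered = pow(ciphertext, d, n)
print(recovered == pre_master)  # → True: both sides now share a secret
```

Once both sides hold the same pre-master secret, they derive symmetric session keys from it and switch to much faster symmetric encryption for the rest of the session.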