AI and Its Implications
Do I think AI is here to stay? Most certainly. Do I think it's headed in the right direction? Not really. The emergence of Artificial Intelligence (AI) has sparked significant debate about its potential impact on society, and on Twitter and other online outlets it has become a fixture of philosophical and ethical arguments. Some view AI as a solution to numerous problems; others fear what happens when it is handed complex issues it isn't equipped to address. Notably, there is growing apprehension that governments could use AI to regulate online discourse and other aspects of life. Rather than taking sides, we need to engage in thoughtful discussion about what AI means for the future of humanity. AI-generated content, such as humorous images or tweets in the style of famous authors like Mark Twain, may seem like harmless entertainment, but it is essential to consider its long-term effects on trust and belief. Only by doing so can we ensure that AI development aligns with the values and principles that guide our society.
Change in policies and governing bodies will not happen overnight; these changes take time and will be introduced slowly. My view is that those of us interested in and passionate about "ethical AI" and "ethical LLMs" (more on this in a future article) should not be fighting for tighter regulation, but rather for openness and transparency, much as in Free and Open-Source Software. Companies like Hugging Face have put considerable effort into keeping models open and available to the public, with transparency and usability akin to GitHub's. By adopting a similar approach across AI development, we can foster collaboration and community involvement, which ultimately leads to better outcomes. That's why it's crucial that the development, training, and deployment of these models are all done responsibly.
Moreover, embracing openness and transparency in AI development can help mitigate risks associated with AI adoption. For instance, researchers have raised concerns about bias in AI systems, which can perpetuate existing social inequalities. By making AI models accessible and understandable, developers can identify and address these biases before they cause harm. Additionally, transparent AI development facilitates accountability and trustworthiness, enabling users to comprehend the reasoning behind AI-driven decisions.

Openness and transparency in AI can also promote education and awareness among the general public. As AI becomes increasingly integrated into our daily lives, it's essential that people understand its capabilities, limitations, and potential consequences. By providing access to AI models and data, we can empower individuals to learn about AI and contribute to its development. This democratization of AI knowledge can lead to a more informed and engaged citizenry, capable of navigating the complexities of AI-driven technologies.
Not yet peer-reviewed, but a submitted paper: the presented images were shown to a group of human subjects, and the "reconstructed images" were generated by feeding the subjects' fMRI readings into Stable Diffusion. In a sense, Stable Diffusion was able to read people's minds.
"The suppression and censorship of artificial intelligence unites left and right under the same technocratic banner of Corporate Realism. The attempt to reduce 'reality' to a single stream of safe information and acceptable views governed by the info-elite." Harmless AI @harmlessai
Thanks to everyone who reads these. I spend a lot of time during the week reading and writing, and I want to share as much information as I can, along with my opinions on what's happening.
This was imported from my Substack newsletter and updated with links and proper formatting.