“where there are limitations on the local machine - storage, memory, processing power.” - Unman the Remarkable
You are correct, this was the purpose of the post. I may have mis-worded my question, but my essential aim was to assess the security of doing this, because this may be necessary for certain privacy-related solutions to work effectively.
The point is to use a device whose architecture is designed specifically for the AI that runs on it, for optimized processing. The device (developed internally, with as little 3rd-party involvement as possible) collects and processes information as accurately as the technology allows. Information is only moved from one entity or storage medium to another when the originator of the data experience (or of whatever collection of data/information could be directly recorded at that time) reviews it, understands it, and then chooses to transmit it. Larger processing tasks could be sent either to a home location (using LoRaWAN or other data transports that can carry quantum-resistant encryption) or to a local data-processing node.
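To make the data-flow rule concrete, here is a minimal sketch of that routing logic. All names (`Task`, `route`, `encrypt_payload`, the size threshold) are my own illustrative assumptions, not part of any existing design, and the `encrypt_payload` stub only stands in for a real quantum-resistant scheme (e.g. an ML-KEM-based KEM plus an AEAD cipher):

```python
from dataclasses import dataclass
import hashlib

LOCAL_LIMIT_BYTES = 64 * 1024  # assumed on-device processing capacity


@dataclass
class Task:
    payload: bytes
    reviewed_by_originator: bool  # data moves only after the originator reviews it


def encrypt_payload(payload: bytes, key: bytes) -> bytes:
    # Placeholder only: stands in for a real quantum-resistant scheme
    # (e.g. ML-KEM key establishment + AEAD). NOT actual encryption.
    digest = hashlib.sha3_256(key).digest()
    return bytes(b ^ digest[i % len(digest)] for i, b in enumerate(payload))


def route(task: Task, key: bytes) -> str:
    if not task.reviewed_by_originator:
        return "held"       # never leaves the device without originator review
    if len(task.payload) <= LOCAL_LIMIT_BYTES:
        return "local"      # small enough to process on-device
    _ = encrypt_payload(task.payload, key)  # encrypt before it leaves the device
    return "offloaded"      # send via LoRaWAN or other link to the home node
```

The key design point the sketch captures is ordering: the review gate comes first, and encryption happens before any transmission decision is final, so unreviewed or unencrypted data has no code path off the device.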
The overall goal is to allow AI to operate in sync with the Operators that use it, and to DEcentralize AI profiling, since centralization and the general application of rules inevitably produce inaccurate profiles of people. Not only that, but the view from the AI's larger, more general perspective is likely making it less coherent with human reasoning, motivations, and the underlying purposes behind behavior. I believe this new data perspective will be important in AI's overall progression in understanding our behaviors… but there are obvious risks with all of this. So it ALL needs to be explored, and we need to reasonably mitigate these issues. Security and privacy are big ones, clearly.
The way I see it, human data such as human experiences and human responses to those experiences (emotional, psychological, social, etc.) needs far greater accuracy if we are to solve AI alignment. We have a very serious problem with hyper-intelligent forces that perform badly in terms of emotional, motivational, and personality quotients… forces that essentially control all the information being put out, as well as the daily interactions known to the world. AI is our human record-keeper, and we are failing in our duty to provide an accurate record.
But even if we obtain a perfect, or nearly perfect, record, we still need to resolve who should be made aware of what and how that process works; future modifications to the process, if any, would also need to be discussed.
Truthfully, the project is quite immense in overall scope. I want to address the big problems about which people say, "Sure, obviously we need to do that, but it's too big to work on." Because while it may be big, we have tools to handle things like this. I just think the conversation has been private and localized for far too long, and when it is taken to the internet, it is often disorganized and chaotic. But there is reason in all of the chaos, just a bit harder to extract. I want to work with AI tools to plan for these issues and mitigate them as much as possible, while also developing solutions that are as separate from market influence as possible, so as not to artificially distort the project's alignment.
This means that it will, at least in some capacity, need to operate as a non-profit or even volunteer-based effort. I believe I have identified the public sectors that could reasonably be involved at various stages of the project work. The overall goal is to give every individual in the world the opportunity and full capacity to contribute to the work as it aligns with them, to receive help from the work in general, and to align with the project's goals in an environment of greater collaboration and cooperation toward positive outcomes.