Scratch Thoughts on Tomorrow's Cloud Computing Ecosystem

Reading Time: 1 minute

// Designing tomorrow's heterogeneous high-performance computing environments to cater to the ubiquity of cloud computing: running algorithms efficiently so that the services that will define how we compute tomorrow can reach the masses

I believe this can be realised with the development and improvement of high-level synthesis (HLS) and heterogeneous MPI structures, with compilers stepping in to determine which sections to synthesise, to ensure this is adopted as a way of computation.
// computer architecture research
// Designing computer systems with variable-architecture ISAs to increase the scope of computing
// Doing the same with a mix of HLS compilers and heterogeneous computing schedulers

This clearly needs the investment of a significant amount of time. Will hopefully work it out soon.

Why the USPTO does not accept patent applications for Recursive compression algorithms :P

Reading Time: 2 minutes

TL;DR: I've been studying Information Theory.

Recursive compression algorithms essentially promise to recursively reduce the bits needed to represent the information content of a given source. They are, in many ways, like perpetual motion machines.
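The analogy can be checked by a simple counting (pigeonhole) argument: there are 2^n distinct n-bit inputs but only 2^n − 1 bit strings strictly shorter than n, so no lossless scheme can shrink every input, and a recursive compressor would have to shrink its own output forever. A minimal sketch of that counting:

```python
# Pigeonhole argument: a lossless compressor cannot shrink every input.
# There are 2**n distinct n-bit strings, but only 2**n - 1 bit strings
# of length strictly less than n, so at least two inputs would have to
# share one shorter output, breaking losslessness.

def count_inputs(n: int) -> int:
    """Number of distinct n-bit inputs."""
    return 2 ** n

def count_shorter_outputs(n: int) -> int:
    """Number of distinct bit strings of length 0..n-1."""
    return sum(2 ** k for k in range(n))  # equals 2**n - 1

for n in (4, 8, 16):
    # Always one fewer shorter string than there are inputs.
    assert count_shorter_outputs(n) == count_inputs(n) - 1
```

The gap is only one string per length, but applied recursively it compounds: every pass needs strictly fewer outputs than inputs, which is impossible for all sources at once.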

The insights gained here really helped me appreciate the efficiency in communication we have achieved thanks to Shannon. For reference, the Second Law of Thermodynamics:

The second law of thermodynamics states that the total entropy of an isolated system can never decrease over time. The total entropy can remain constant in ideal cases where the system is in a steady state (equilibrium), or is undergoing a reversible process. In all spontaneous processes,[1] the total entropy always increases and the process is irreversible. The increase in entropy accounts for the irreversibility of natural processes, and the asymmetry between future and past.[2]

I am in the middle of starting a research project that explores a recursive compression algorithm. Not sure how it's going to work out yet, but it seems really interesting at its present stage.

What I'm doing, roughly, is taking a matrix and finding keys that map it to elements in a predefined dictionary that I can look up and recompute from.

My initial results map an n×n matrix down to a dictionary of all matrices that have occurred so far, plus 2×n-sized keys and a dictionary index identifier.

It looks like this can bring a 4×4 bit set (16 bits) down to 2×4 + 4 = 12 bits, i.e. a compression ratio of 75% (a 25% size reduction) on one pass, ignoring the size of the initial dictionary.
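The dictionary-and-lookup idea can be sketched in a simplified form. This is not the exact 2×n keying scheme described above; it just splits a bitstring into fixed-size blocks, keeps a dictionary of blocks seen so far, and emits an index per block, and the function names are illustrative:

```python
# Simplified sketch of a dictionary-based block coder: split a bitstring
# into 16-bit (4x4) blocks, keep a dictionary of distinct blocks seen so
# far, and emit one dictionary index per block. Not the exact 2xn keying
# scheme from the post; illustrative only.

def encode(bits: str, block_size: int = 16):
    """Return (indices, dictionary) for bits split into fixed-size blocks."""
    assert len(bits) % block_size == 0
    dictionary = []   # all distinct blocks, in first-seen order
    seen = {}         # block -> index, for O(1) lookup
    indices = []
    for i in range(0, len(bits), block_size):
        block = bits[i:i + block_size]
        if block not in seen:
            seen[block] = len(dictionary)
            dictionary.append(block)
        indices.append(seen[block])
    return indices, dictionary

def decode(indices, dictionary):
    """Recompute the original bitstring from indices and the dictionary."""
    return "".join(dictionary[i] for i in indices)

data = "1010101010101010" * 3 + "0000111100001111"
idx, dic = encode(data)
assert decode(idx, dic) == data   # lossless round trip
```

The round trip shows why the scheme is lossless: the dictionary plus the index stream fully determine the input, and the saving comes entirely from repeated blocks sharing one dictionary entry.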

If done over multiple passes, this could recursively bring down the size of the overall file significantly.

This is lossless. The computation is small for a 4×4 quantity but gets computationally very expensive for a large 8×8 bit set, which would be an ideal size given today's CPU architectures (an 8×8 block is 64 bits, one machine word). I am thinking about implementing this on an FPGA and making an accelerator core to help me.

EDIT 1 :: Mathematically working the same out, it looks like I need a dictionary of 2n+1 bits, rendering this more of an encryption scheme with a hashable map of keys that actually expands the size of a file, as each block can map to 2 elements of the dictionary; I'm trying to figure out how I can optimise this.
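The expansion problem EDIT 1 runs into comes down to counting: a 4×4 block has 2^16 possible values, so an index able to address any of them needs 16 bits, exactly the size of the block itself, leaving nothing to save in the worst case. Savings only appear when far fewer distinct blocks actually occur in the data. A quick sketch of that counting:

```python
import math

# A 4x4 bit block has 2**16 possible values; an index that can address
# any of them needs ceil(log2(2**16)) = 16 bits, the block size itself,
# so a fully general dictionary index saves nothing in the worst case.
block_bits = 16
possible_blocks = 2 ** block_bits
index_bits = math.ceil(math.log2(possible_blocks))
assert index_bits == block_bits   # no saving for incompressible input

# Savings only appear when far fewer distinct blocks actually occur,
# e.g. a file containing just 16 distinct blocks needs 4-bit indices:
distinct_seen = 16
assert math.ceil(math.log2(distinct_seen)) == 4
```

This is the same observation that leads to EDIT 2: when the saving depends on the empirical frequency of blocks, the scheme behaves like an entropy code rather than a universal recursive compressor.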

EDIT 2 :: After a good investment of time: the approach could be an option for a Huffman-like encoding scheme. A few too many projects at hand; may revisit this later if I get any leads.