Friday, August 18, 2017

mycolic acids - To Stop and/or Slow Down Tuberculosis Infection

To Stop and/or Slow Down Tuberculosis Infection - (c)Rupert S

TB Inhibiting : attacking the fatty acid cell wall of the TB Germ : basic theory

(to be filled out properly later)

in reference to

"why do distinct mycolic acids play such different biological and immunological roles?"

You know, the TB cells having a kind of acidic fatty cell-wall coating makes me think that alkaline-based attack profiling would defuse the issue; lime and lemon... and other natural foods that produce an alkaline environment within the body cavities and vascular systems, lungs, etcetera..

Yes, I know you will be thinking that this is basically another Eco/Meditation/Philosophy piece,

However, since the coating of the TB germ is basically based on a fatty acid,
There are two definite ways to attack a fatty acid..!

Alkaline solutions and pH-unfriendly environments; Alkaline soaps and detergents!...

So how do we deliver this? you may ask. We bind the compound or molecule to, for example, a binding-active enzyme, food type or antibody... (other body-friendly compounds/chemical agents and mild soaps may be found)

Active enzymes that attack and break up fats are attack vectors; they themselves may need the help of soaps, alkalis, bases, antibodies and fat-absorbing or coagulating compounds (not so heart-attack friendly, this one)..

However, bearing in mind that drugs designed to inhibit fat coagulation in the heart and veins should produce results inhibiting the formation of tuberculosis clusters,

Conversely: the reverse action of forming and then starving clusters of tuberculosis cells would also produce positive results,

Coagulating the tuberculosis clusters and then basically nuking them and/or starving them.

Rupert S

Thursday, August 3, 2017

Quantum Plasma - The new nuclear bomb ?

Quantum Plasma - The new nuclear bomb ?

Science is fundamentally an exciting topic...
However we have to consider how worthwhile our world is while experimenting ! 

Question does #quantum #plasma swirl & move fast enough to #smash #atoms & #selfseed ? like #nuclear #BOMB #news

Simply put, the world is in danger from various factors like biological warfare and ignorance..,
Simply stated, we do not actually know the exact quantification of a single experiment's, or run of experiments', potential dangers...

For example, if we decode and re-code mankind, will we do better than 2 billion years of evolution?
What are the impacts of experimentation? And should we carry out virtual experiments first?

What are the bonding energies of atoms, and what is the cohesion potential of quantum plasma?
Furthermore, what is the exact temperature for re-seeding the Big Bang? Or automatically re-seeding quantum or star plasma?

Aside from that, what kind of energy/light does a 1-billion-degree energy source give off? And is that dangerous?
Is the swirling vortex moving into the black hole, dimensionally... or up towards the Bang?
Vortices are created for a reason, after all.

Paranoia is essentially caution gone wild & what of it ?

Rupert S

Tuesday, June 27, 2017

Data Analytics & Data Science - Securing the web and computers from cyber attacks, through High Performance Computing web data analytics & neural nets + AI

Data Analytics & Data Science - Securing the web and computers from cyber attacks,
through High Performance Computing web data analytics & neural nets + AI

With the appearance of the Petya ransomware and the older systemic damage of WannaCry..
There appears the unappealing face of modern computing through the networked interface!

Clearly the cost to hospitals, schools and universally loved computers, with important research, medical work and personal documents, work and photos.. becomes more apparent to the #Net per day.

Simply put, if the NSA will spend millions spying on us, can we not spend a few industrial HPC seconds beating a stupid computer virus?

Our proposition is to put the network capacity on alert to the transmission, and to contain, trace and root out the villainy.

Use the Neural net security capacity and our own brains to blow the infestation off the pillars of our social necessities.

#analyse the #data #dispersion quickly with the help of

@IBMNews & @cray_inc #DataScience

#Petya #ransomware

Monday, June 26, 2017

Stars and acceleration - Calculating the energies of the universe.

Stars and acceleration - Calculating the energies of the universe.

When the stars curve around a point of high gravimetric mass,
Closeness to the origin point sets the destination, as vector velocity..
Moves the star, planet or object in a balanced path... As energy expands or shrinks according to the rules of vectors..

The standards of energy distribution in a space, Vacuum or atmosphere.

So what do these principles of dynamic vectors mean to us ?

The path of the stars inside the Milky Way relative to gravimetric points reveals to us:


The average density of gas & particulate.

The variance of mass versus distance...

The total mass of the Milky Way.

The velocity of the Milky Way relative to the universe..

Whether the Milky Way is slowing down...

Relative velocity..

The relationship between the velocities of our galaxy and surrounding space.

The probable location of other distorting mass.


In short, the more capacity we have to do vectored mass calculation and thermodynamics, the better our understanding will be!
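As a minimal sketch of the kind of vectored mass calculation meant here (assuming simple circular Keplerian orbits, which real galactic rotation curves are famous for deviating from), the mass enclosed within a star's orbit follows directly from its circular velocity:

```python
# Estimate the mass enclosed within a star's orbit from its circular
# velocity, using the Keplerian relation v^2 = G*M/r  =>  M = v^2 * r / G.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def enclosed_mass(orbital_velocity_m_s, orbital_radius_m):
    """Mass (kg) enclosed by a circular orbit of the given velocity and radius."""
    return orbital_velocity_m_s ** 2 * orbital_radius_m / G

# The Sun orbits the Milky Way at roughly 230 km/s at roughly 2.5e20 m (~8 kpc).
m = enclosed_mass(230e3, 2.5e20)
print(f"Enclosed mass: {m:.2e} kg ({m / 1.989e30:.1e} solar masses)")
```

The interesting science is in the residual: measured velocities stay flat far beyond where this simple model says they should fall off, which is exactly the "probable location of other distorting mass" question above.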

Projects like the Milky Way Project are important to this, and for this we need High Performance Computing and BOINC by Berkeley

(c)Rupert S

Saturday, June 10, 2017

#Manspreading - (The formation of the legs as to spread them & or other body parts) - the particulars of this contentious topic examined in detail.

(The formation of the legs as to spread them)

(and other body parts)
Man-spreading .... the particulars of this contentious topic examined in detail.

Let's start with the complaint most common:

"Men are in the habit of spreading their legs and taking up all the room on the bus or on the sofa"

"Do men really need all that space ? are your testicles really that big !"


So let us analyse the issue a little, to explain a bit of how it works,

First of all, man-spreading varies from person to person, size to size, and also by situation..

Man Spreading Variables list:

A : how hot it is.

B : How hot the male/female is.. (heat wise)

C : What kind of clothing is the man/woman wearing? Are the clothes loose or tight..

D : How hot are the genitals of the man/woman ? & how wet..

E : How much pressure is the muscle or fat/skin between the thighs pushing onto the male/female's genitals?

F : are the upper legs contacting one another on the lower or upper parts ?

G : Is the Male/Female sweating a lot ?

H : Is He/She carrying a lot of weight or walking a long way... ?

There are more; But we will have to define those with more canvassing.


Both males and females are in the habit of spreading body parts; Particularly when they carry excess weight,
Thus there is the important matter of moderation and compassion ..

A reaction is understandable.. but nonetheless.. both males and females have their need for space, & the larger a person is, the more space they will use..

Furthermore there is obviously a need to consider others when one sits, plays, works, travels or sleeps..

In conclusion we ask the public to maintain the constants of:

Moderation, Compassion & Necessity.

Thank you kindly

Rupert S

Friday, May 19, 2017

Zika virus mutation and the challenges

Zika Virus - Mutation and the challenges.

I have thought-matrixed the idea behind the Zika virus research, and I have come to a disturbing conclusion about the problematic relationship between the Zika virus and comparable examples: the common flu .. and swine flu...

Both forms can prove deadly and problematic for chemical treatments ...

You see, the flu mutates between hosts, and within a single host, quite regularly...
"So what is the problem? Zika is not the same!" The problem is that Zika, like the flu or the cold, is an example of a cross-species parasitical entity ...

Therefore it changes between host and delivery host (the mosquito); Both have changing DNA through breeding cycles, and consequently the Zika virus must mutate rapidly to outperform host adaptation..

Chemicals that bind now most probably will not perform their jobs at a later mutation cycle, and may vary in performance over the mutation bias...

The very nature of rapidly developing mutation both changes and challenges the non-adaptive chemical-treatment bias of research and scientific study!

One sample of the genetic code may not always prove valid for all variants, and most problematically may not prove effective; This variance is, after all, what provides for survivors of diseases like the plague, and the forming of man from the suggested primate ancestry.

So what could we do about this? Have the bind points proved to be unchanging, or mutating.. are there variances in these bind points?

What are the inevitable problems we will face in science and medicine over these crucial issues?

Rupert S

zika virus further research

zika virus

most relevant - Analysis of Dengue Virus Genetic Diversity during Human and Mosquito Infection Reveals Genetic Constraints

mutation rates amongst RNA viruses

viral mutation rates and math

The prediction of virus mutation using neural networks and rough set techniques - the analysis engine

Predicting virus mutations through statistical relational learning

Replication and Adaptive Mutations of Low Pathogenic Avian Influenza Viruses in Tracheal Organ Cultures of Different Avian Species

Wednesday, May 17, 2017

Sensible VulkanGL

Vulkan & OpenGL/ES - Standards and includes

The optimisation of OpenGL and the substantive inclusion of the eco/power-friendly OpenGL ES & Vulkan standards....

The program driver include would read intrinsic operating system binds ... (the OpenGL standard is sensible)

*flag table*

power save on or off

preferred render bind : Vulkan , ES , GL

intrinsic compatible flags for all mutually compatible functions & the caching and trans-coding of those flags into the preferable & mutually compatible program execution format...

(the function calls will be optimised in stack)
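A hypothetical sketch of such a per-application flag table, in Python for brevity; the field and method names here are illustrative only and do not correspond to any real driver API:

```python
# A hypothetical per-app render-flag table, as sketched above.
# Field names are illustrative -- no real driver exposes this structure.
from dataclasses import dataclass, field

@dataclass
class RenderFlags:
    power_save: bool = True
    # Preferred render binds, in order of preference.
    preferred_binds: tuple = ("vulkan", "gles", "gl")
    # Mutually compatible function calls, cached so they can be
    # transcoded once into the preferred execution format.
    compatible_functions: set = field(default_factory=set)

    def choose_bind(self, supported):
        """Pick the first preferred bind the device actually supports."""
        for bind in self.preferred_binds:
            if bind in supported:
                return bind
        raise RuntimeError("no compatible render bind")

flags = RenderFlags()
print(flags.choose_bind({"gles", "gl"}))  # falls back past vulkan to gles
```

The point of the sketch is the fallback order: a device is never blacklisted outright, it simply negotiates down to the best bind it shares with the program.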

Because we do not need the ridiculous situation where high-level ES devices that have the majority of the GL standard are blacklisted from GL programs... or vice versa.

The pipeline of GL is already a negotiated transition of the sub-standards of the GPU and Mesa SDK instruction-sets ... such as NVidia and ARM, or PowerVR and ATI

Standard inclusion of emulation and trans-coding of the bindings enables at least support & hardware optimisation, on CPU or GPU,

Commons like : Compression standards are already included and are across the standards anyway !


The code of the render path would be pre-execution optimised, like the shader cache is...
Whether the code is pre-runtime optimised in the execution stack / Dalvik / etcetera...

Or comprehensively optimised at run-time, dynamically, would depend upon the needs of the OS, the programmer and/or the user,

Obviously pre-caching the runtime stack in a compiled form would reduce compute time, and for that reason Dalvik and its replacement were included in the beautiful Android OS.

Shifts in power plan & the available system resource share are obviously going to happen dynamically...
and should also be flags, with user flags for power use and resource use being the de facto standard per app, and system flags as the subset under the user flag system.


At the moment the Vulkan standard (including the CL and RenderScript paths) is the preferred rendering path for speed and excellence,

However the feature set system must be compatible with the standards for function and class transfer and the simple programming of the standard ...

SDK's should be simple to use and they will be under the unified feature set list.

Devices with ES 3.1 support should/must obviously receive the Vulkan libraries immediately.

In addition older devices would have service upgrade libraries for the easy transport to device of the new standard; To maintain the optimal utilisation and function of all devices.

Rupert S

Tuesday, April 25, 2017

RNG and the random web - Haveged / RNGTools - Chaos - Crypto - Science of Hardware & Computer Driver - entropy

RNG and the random web - Haveged / RNGTools - Chaos - Crypto - Science of Hardware & Computer Driver

*preface* What is the difference between chaos and entropy?
Chaos is an issue of confusion .... of logic that spirals unpredictably out of control ....
sometimes exciting, sometimes bad ... confusing, exciting .... lacking perfect definition.
Order/logic go hand in hand in the digital age....
Entropy is the disordered, but ordered-by-average, breakdown of the system into a form that statistically meets the requirement that: (all sums eventually average to zero as much as possible)
ergo statistically: Chaos and Order/Logic both exist in entropy ...


Entropy, or preferably randomness, plays a very important role in science and the internet...
Security and Research both need this.

But most commonly they lack drivers ..

Phone & PC Random/Seed/Entropy is a problem, so making an app like Ubuntu's entropy seeding app,
With high quality randomness, would be a life saver to the phone user,
In addition the RND CRNG, TRNG or NRNG could use AES to magnify the pool ... or Blowfish etcetera!

For non-rooted phones, an RNG device could be installed; if an RNG device is impossible to install, then another noise source ..
For the Phone/PC/Mac/Server OS.

*Driver Function and utilisation* (Copyright Rupert S)

Multiple sources of entropy, and the hashing of that, combined and injected through AES hardware,
is not included.. in applications on Phone, Windows, Mac etcetera..

the use of a Hardware Encrypted cache saved to drive .. for example :

Original fresh random/entropy will be stored securely in flash and or on HD/SSD/RAM to further secure the RND Pool.

1mb of RNG data that has not been used, to add to the boot source & during low ebbs in entropy data,
To be refreshed depending on the recording media..
& additional pre-AES/Blowfish/encryption-mode processed data in RAM.

(4mb is large enough to use but small enough for 256mb RAM devices.)

Fortunately this is 4 weeks development at most.

So kernel inclusion of the driver base is a must

With the main tool being protected space; With distribution to user of AES; blowfish etcetera, hashed and expanded data

NX DEP protected data contained securely,

you can seed the data and remix that with new data..

Mixed data is the strongest, and surely the least predictable of the lot, since despite using algorithms the output is clearly unpredictable.
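A minimal sketch of this source-mixing idea, using only the Python standard library; SHA-256 in counter mode stands in here for the AES/Blowfish expansion step described above, since hashlib ships no cipher:

```python
# Mix several entropy sources into one pool by hashing, then expand the
# pool deterministically.  SHA-256 in counter mode substitutes for the
# AES/Blowfish pool expansion discussed above.
import hashlib, os, time

def mix_pool(*sources: bytes) -> bytes:
    """Hash all sources together into a 32-byte pool seed."""
    h = hashlib.sha256()
    for s in sources:
        h.update(len(s).to_bytes(4, "big"))  # length-prefix each source
        h.update(s)
    return h.digest()

def expand(pool: bytes, n: int) -> bytes:
    """Expand the pool into n output bytes by counter-mode hashing."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(pool + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

pool = mix_pool(os.urandom(32),                    # OS entropy
                time.time_ns().to_bytes(8, "big"), # timing jitter
                b"stored-boot-seed")               # seed cached from last boot
print(expand(pool, 16).hex())
```

The length-prefix matters: without it, mixing `b"a", b"b"` and `b"ab"` would produce the same pool, and an attacker who controls one source could cancel another.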

Entropy SIM and SSD cards are an option & can contain an actual memory array flash combo to be super fast;
but economical.

(Copyright Rupert S)


For a windows/phone RNG device .... i have been thinking !

You could modify the driver, and make your own, to take data from the RNG devices on the COM ports & obviously PCI etcetera..
Commonly on the Linux system, entropy/RNG/random drivers are in the kernel but are most commonly not configured properly;
These are the problems we need to fix & fix well..


Haveged exists on Linux but not on Mac or Windows.... (The characteristics of haveged are not necessarily guaranteed to have all the chaos that we need.)

However haveged is one option that, combined with AES/Blowfish random expansion, can help with entropy issues!

Haveged is not the only solution and furthermore TRNG/CRNG need optimization .... to Increase security and to provide true crypto/Rand function.

Haveged provides a viable additional source of entropy ....
Preferably not as the only source,
However haveged is a product that produces results,

Which we surely need in random-bit-starved computers and mobile markets ....

Yes, the CPU/GPU so configured can obviously create logical, and not so perfectly entropic, results,
However we have to ask ourselves: do we need randomness filled from a viable source available to all?
The answer is obviously yes.

Haveged produces data far superior to just the user input...
Furthermore the tasks running on the computer and or within the system improve the output...

As the necessity to use haveged increases;
Most likely the user will be running more tasks that need to use it! And hence there will be better results, and more of them.

Yes, a true TRNG is a state of peace in the true security advocate's heart, but there is always room for an improved haveged..
Both on Windows, on Mac and on other operating systems.

(copyright : Rupert S)

viorng: the Virtio RNG driver

It seems a simple and elegant solution that would allow for the use of RNG data, and would allow other devices of the same type to work well!
This would be a service to all, and allow research sharing,
The driver is open source.

Other device drivers could also be made not just for virtual machines...


Other tools and functions to call to make the C/N/T/RNG ... Functional - please read all !

*well thought out analysis of the entropy system care of getnetrandom & Wisconsin university*

*online entropy fetch with Client for windows and linux servers and soon android*

*RNG SDK links* - for compilers and code optimisation

* windows driver implementation* Cryptographic Provider Development Kit

*SSL information* - Smart Software Defined Networks - Secure Encrypted Virtualisation Key Management - PROTECTING VM REGISTER STATE WITH SEV-ES

*T/C/RNG Providers* - provided by the whitewood security core they have now got both linux and windows services.

Workers :

*news and paper*



Q & A (Copyright Rupert S etc)

"how can you ensure that a particular kernel driver runs before other system processes?
for example doesn't ASLR run way before anything else?"

The boot kernel drivers boot before the OS, with the network driver
(for secure network driver loading for server sessions);
Keep a cache of RND data and bingo:
Secured boot with high chaos maintenance.

"to make USB tpm/dongle devices and boot is secure and the os is safe from intrusion (low priced preferably)"

the driver has to have a verified certificate

"everything makes sense here the details of boot kernel driver vs regular kernel module."

Microsoft and Red Hat kernel drivers need certification on servers and generic OS implementations;
Go directly to them and register your certificate.

Get involved in the RNG Tools project and the kernel development for Linux,windows & mac,

Also the Android kernel is based on the Linux kernel, but implemented through open source development and deviation from Linux source.

"What's your feeling on RNG Tools in general, and from the point of view of it being an optional component people have to consciously seek out and add in vs. being a "built in" part of a standard distribution?"

Personally I believe in RNGTools, and its usage is a must!

Multiple sources of entropy, and the hashing of that, combined and injected through AES hardware,
is not included..

Fortunately this is 4 weeks development at most.

So kernel inclusion of the driver base is a must (with the main tool being in protected space, with distribution to the user of AES, Blowfish etcetera, hashed and expanded data).


Friday, April 7, 2017

boinc - enhancing research workloads for the benefit of mankind & humanity - Computer Optimisation - CPU , GPU & RAM - PC, Mac & ARM development

boinc - enhancing research workloads for the benefit of mankind & humanity - Computer Optimization - CPU & GPU

HPC - High Performance Computation for beneficial goals and obvious worth.

(Guide, experimentation, developer kits and manuals)

Observing the workloads of many beneficial projects, we find that commonly the workload data set is small,
In addition to the memory set being smaller or larger than a machine can compute optimally, we find that feature sets such as FMA and AVX have commonly not been implemented,

Some projects, like Asteroids@home and the SETI project, are using enhanced computation instruction sets ... like AVX, and memory loads that benefit from the 4gb or more RAM that is available on decent gaming and home laptops.

Not all modern machines have loads of RAM; However research and/or university establishments use sufficiently powerful machines that can glow on the BOINC record in full glory with a 256mb to 768mb workload,

In addition the machines commonly run Xen and the like, and servers may have SPARC or PowerPC specific hardware and instruction sets,

In order to examine examples.. below we can see workloads that include small data arrays, in the 40mb to 79mb range..

In line with servers and gaming rigs.. we have 1gb of RAM per core; of course not all issues require a larger array in the workload, and some machines have 256mb per core!

However much RAM you allocate to the projected workload, small memory loads can and will be sufficient for data swapping and/or paging (like DNA replicators)...

Some task can sufficiently benefit from larger thread and data models, to my mind DNA and mapping data are fine examples of specific workloads; Where memory counts,

In addition, thread count can be 4 or other numbers, and I suggest that a single task can use more than one core and instruction set (NEON for example, or symmetric threading of the FPU, SMT)

Specific workload optimisation, or rather generic with SSE and AVX and FPU threading and precision optimisation would be very cool while we deal with the workload running app.

In particular the Ryzen multi-core is a new and exciting product,

So take care to read the guides in the lower half of the document, AVX2, RDSEED, ADX and additional encryption formats are some of the most exciting changes to the AMD Ryzen Arch.

The report on the vina BOINC project, for the Zika virus chemical examination through the computer hive, proves interesting... and mentally testing/stimulating,
Showing the problems that properly optimising code for Chemical/Biological examination can face.

Further thought ...  Efficiency :

add a MHz/Dhrystone's/MIP'S performance per watt to each system ...
then projects will further optimise workloads to improve upon workload energy & environmental efficiency versus work carried out.

Work Hours x Mhz / (efficiency per watt)
Hours / % of projects finished with work completed

Also bear in mind that GPU's need watt efficiency and task management to optimise power used versus work done....

worker priority should always be :

efficiency + merit of the work
time / % necessity
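The two formulas above could be sketched as follows; the function and field names are illustrative only, since BOINC does not expose exactly this record:

```python
# A sketch of the per-host efficiency metric and worker priority proposed
# above.  Names are illustrative -- BOINC exposes no such record directly.

def efficiency_score(work_hours, mhz, watts):
    """Work Hours x MHz / (efficiency per watt), as in the formula above."""
    return work_hours * mhz / watts

def priority(efficiency, merit, time_hours, necessity_pct):
    """(efficiency + merit of the work) / (time / % necessity)."""
    return (efficiency + merit) / (time_hours / necessity_pct)

# Hypothetical hosts: a 45 W laptop vs a 180 W desktop, 10 work-hours each.
laptop = efficiency_score(work_hours=10, mhz=2700, watts=45)
desktop = efficiency_score(work_hours=10, mhz=4000, watts=180)
print(f"laptop {laptop:.0f} vs desktop {desktop:.0f}")  # the laptop wins per watt
```

Even this toy version shows why the metric matters for scheduling: the slower laptop outranks the desktop once watts are in the denominator.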

Please examine the issue further.

Rupert S

HPC Computing work load Photos - HPCSet 2 photos - we need Chaos Seeds : Random seeds for our work - Optimizing HPC Service Delivery by a life time super computing tec - Scaling and Optimizing Climate and Weather Forecasting Programs on Sunway TaihuLight - very exciting

HPC Best Practices..

AMD Platform Optimization - please read for all developers - particular instruction differences for microcode optimisation - code optimisation a few very important lessons... may seem simple to some but obviously is not to be taken for granted.

CPU Optimisation - utility and function. - CodeXL is a code efficiency analyser optimiser debugger for GPU and CPU and system. - speeding up code a guide - profiling and bench-marking. - PGI Compiler guide - code optimisation for all programmers on X86,X86-64bit and some others.. this is a terrific resource !

for example : Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 htt pni ssse3 fma cx16 sse4_1 sse4_2 popcnt aes f16c syscall nx lm avx sse4a osvw xop wdt fma4 topx page1gb rdtscp bmi1

or for example : Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 htt pni ssse3 fma cx16 sse4_1 sse4_2 popcnt aes f16c syscall nx lm avx svm sse4a osvw ibs xop skinit wdt lwp fma4 tce tbm topx page1gb rdtscp bmi1

for an improved upon instruction list in the newer boinc application.. (with appropriate configuration)
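A sketch of how a workload application might inspect such a feature string before dispatching an optimised code path; on Linux the flags come from /proc/cpuinfo, but a literal string keeps this sketch portable:

```python
# Check a processor feature string (like the examples above) before
# dispatching an optimised code path.  On Linux these flags come from
# /proc/cpuinfo; a literal string is parsed here for portability.
FEATURES = ("fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov "
            "pat pse36 clflush mmx fxsr sse sse2 htt pni ssse3 fma cx16 "
            "sse4_1 sse4_2 popcnt aes f16c syscall nx lm avx sse4a").split()

def pick_code_path(features):
    """Return the best available instruction-set tier, most capable first."""
    for tier in ("avx", "sse4_2", "sse2", "fpu"):
        if tier in features:
            return tier
    return "generic"

print(pick_code_path(FEATURES))
```

This is exactly the dispatch decision that lets one BOINC binary serve both the AVX-capable Ryzen described above and an older SSE2-only host.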

11000 Mips & 2700 FPU Mips - Per Core

an article that took some deep learning... itself ôo, anyway very interesting....
HIP C++ will, we think, be simpler than OpenCL then, as a higher-level code port...
and machine-converted CUDA code to 99.6%


Compilers and Make compliant with SMT and other HPC Standards

*not free obviously .. intel*


*compilers with FORTRAN specifics and preferably C/C++ and HPC (compatibility: C++/C compatible with FORTRAN preferably) (limitations: nVidia-compatible GPU CUDA code & no obvious statement of OpenCL support) - LLVM, it seems, has FORTRAN compatibility.. (needs research) - check it out

Fortran Specialists (no C++ etcetera)

*ibm guidance*


PC/Mac/Windows/Linux/Android - high performance computation - the method and the means HPC Report

* - Overview of MPI message characteristics of HPC Sever proxy applications.

*Interesting statistics from which one can conclude that 64 to 256 core units is the space within which,
The maximum increase in message noise/entropic noise; Related to inter process communication is observed.* Microsoft HPC Pack 2016 including linux all HPC Packs 2016,2012 to 2008 info and download Microsoft High Performance Computing for Developers - info and downloads - information and virtualisation

OpenVX for high performance Computing : Multi platform spec

"OpenVX for HPC Neural Nets and processing .... a new way to deliver on research, gaming & processing of data and images"


Open CL "GPU Development" links for SDK, learning & optimisation resources. - ROCm: Platform for GPU Enabled HPC and UltraScale Computing

installing the AMD SDK improves compute performance, Optimise your code ! information and interesting learning & source  Optimisation for parallel computing information. - CLBlast: A Tuned OpenCL BLAS Library demonstration.

LHC Cern 6 Track GPU Study < help needed... - coders desired.



HIP - HSA - the CUDA Compatible C++ for Heterogeneous Computing - a full guide - Driver for kernel - Smart Software Defined Networks - Secure Encrypted Virtualisation Key Management - PROTECTING VM REGISTER STATE WITH SEV-ES - bios and kernel drivers


ARM Development software/SDK's & tools for high performance computing (ideal for Boinc) for both HPC and APP development.


IOT links - (internet of things)


compiler optimisation - process


Linux arch reference material


Agency GPL

Workers :

Update 2:

for a comparison of Gflops/Mips throughput of various Boinc Tasks ..

here we show the relevance of the code or function used ... AVX for example is multi threaded ! and so is the FPU pipeline of the AMD FX & Ryzen processor..... (original non edited photos ...)

and set 2 (newer)  ....

see the work throughput GFlops compared to code efficiency per task !

Sometimes entropy is needed to fulfil the task, one would imagine (for example on Android)

The improvement of the BOINC and World Community Grid projects has been observed, noted and, one feels, improved upon, ..

further improvement should be implemented as soon as possible; To improve work versus output efficiency.

thank you kindly programmers/Workers & scientists for your perseverance & effort.

RS - Result Studies

Update 3 Q & A:

"In reference to the use of VirtualBox: there is a new product by Berkeley called Singularity that handles repeatable-condition containers... and has low overhead for the virtualisation data-set.

As to the particle spread one should possibly consider the multiple core and threaded core model specific to the Ryzen and intel sets...

One could imagine that the multi-threaded nature of arm server cores combined with the nature of multi-threaded and headed arm CPU's and GPU Run-script environments is a new and uncompromising land of opportunity and challenge.

Many of the instructions in the FMA4 and vector instruction sets have multi-threaded enaction at lower precision..."



Eric McIntosh, accredited scientist, CERN
Project administrator
Project developer
Project tester
Project scientist

"Well we are far from trying to optimise GPU code.

First let me explain that we have a tracking loop over turns
(up to 1,000,000 hoping for 10,000,000 soon) which contains
a large number of inner loops over particles, currently up to 64.
Luckily these loops over particles can be paralleled as each
particle is totally independent. In addition the original author F. Schmidt
pre-calculated everything possible before entering the tracking loop.
Each turn involves some 10,000 steps over a varying number of inner loops,
e.g. straight section, quadrupole, beam-beam interaction, power supply ripple, etc etc

Of which there are about 50 different possibilities. A straight section is really just
a multiply and add, whereas beam beam involves hundreds or more FLOP's.
The first idea would be to use a much larger number of particles to best
utilise the GPU. This however would produce a large amount of I/O and
use a lot of disk space, but maybe not insurmountable, 

However all the code is FORTRAN, the outer loop calls subroutines (could inline), and has many tests/branches.
It would be great if the main loop fitted entirely into the GPU and we would have
rare Host access for I/O or BOINC checkpoint and progress calls or when
one or more particles are lost.

My colleague Ricardo is actively looking at redoing in C which would also allow
much more portability and also allow to be parallel on multi-core systems.
For the moment we just run tasks in parallel, which works rather well (apart
from some current infrastructure problems). I hope to come up with
some numbers next week on GPU testing.

The code itself has been regularly measured and optimised; for example we
re-ordered array indices to optimise memory access and rewrote the Error Function
of a Complex Number to be faster but with adequate precision.

Portability does come at a price but ensures accuracy of results. I shall publish
measurements in an upcoming paper. I am sure we gain much more from being portable
and being able to use almost any IEEE 754 compliant processor.

On the issue of SixTrack and/or experiments this will shortly be under discussion at
CERN I am sure. Currently SixTrack has many more Hosts/volunteers, is simple to install,
and has been around for 13 years. Not everyone loves VMbox. Not a big deal at
present as we rarely have enough SixTrack work to keep all volunteers busy.

I hope to re-address all this in some weeks after current BOINC infrastructure issues
are resolved and we have the new "super" sixtrack with much broader application
e.g.collimation studies and we support a much wider range of platforms MacOS ARM
and use features such as AVX."



Update 4 : Virtualisation

QEMU is obviously of use on many projects, because of machine emulation and virtualisation..

It comes in flavours including Windows, Mac and Linux.


Docker Server & Docker CE (Community Edition), and this comes with a server edition!

So what do the projects & systems.. feel and sense around the subject of using Docker CE?

Obviously the professional version could be used for support of the main project and the CE edition or pro for the user..


How to convert VMs and use Hyper-V and Docker

Update 5 : IO Bottlenecks and solutions.

Drive Cache :

Even 128mb of cache does wonders for #DataScience #storage;
we use 2gb.

#Cache to the #Drive 300mb/s

Friday, February 10, 2017

Open Gaming Internet Backing System - jibs for short

Open Gaming Internet Backing System (c) RS

HTML5 & PHP Backend for internet and Computer Gaming.

With the advent of PHP and the manifold advantages of PHP databases...
there is a place in the system for a back-end to gaming systems that utilises the infrastructure ..

The plan is simple: the Khronos Group will converge the back-end data systems of gaming to the utilisation of processor-, GPU- and system-optimised architecture,

There are many companies with licences for PHP stacks and the database library infrastructure...

Zend is a major example of a competent PHP data stack; Underneath that is the database itself, & in my opinion simple but flexible databases have been studied for years,
therefore they are proven in their reliability and worth,

The gaming industry's need for compatible and flexible web-compatible gaming hives creates the situation where storing pre-compressed long-term data in databases makes space for a viable PHP database stack; One that will converge the necessity for data content to be dynamically downloaded ...

and conversely the maintenance of local data that rarely changes, or does so less frequently,
After all we do need to minimise web traffic on websites! and obviously in gaming as well..

Web gaming is fundamentally no different from modern games like EVE Online; merging from the origins of classic gaming and the internet of the past decade.

However the internet has come to rely upon dynamic content, and the PHP stack is ideal for dynamic content; marketing; sales and, importantly, web-content-based gaming.

The backend can essentially be the same open system..

Why, you may ask, should we converge these data points? There are so many ways to do the work,

convergence list :

Essentially we do not wish to waste effort re-forging the work and thought that has gone into the database system...

1 : Compatibility is one of them... data hives can become boringly complex!
     So why recreate database libraries? Secrecy? Encode them.

2 : Simplicity ! standard archives can be run on any optimised PHP stack....

3 : Data variables are the meat of gaming & internet...
     Simply put we need the archive and we need that archive easy to maintain.

4 : Data can be processed in many ways..
4a: HTML5 creation...
4b: image loading and or processing .... (simple examples)

5 : PHP platforms need just a little investment in plugins to create 3D data...

6 : the PHP infrastructure is future-flexible.

7 : optimising the data requires no new hardware on the server end,
     But can be improved on many levels by boundless innovation.

8 : Direct cross-compatible inclusion of AS Java and other platforms is essentially easy and implicit to the convergence,
    However no system is implicit to the backend apart from the data convergence system.. Desired, yes; needed, no.

9 : CSS and formatting are cross compatible

10 : OpenCL and optimised Hardware & software; "Data Mesh" to back the optimised output and use.

11 : to make a point "Any Open System can back the web" this is about processing ! Images, 3D, Sound and data.

(Copyright Rupert S)

*final note*

Convergence of systems accelerates the adoption of the system (as long as the systems are converged with flexibility and ease of use in mind),

Also one must keep in mind sensible use of time and energy.