The Nexus and Terminal Design in Light of the M-Core
=================================================================================
Jul.05.2014ad

What we have learned in general so far is that it is easy to lose sight. While we were programming ncurses to get our terminal somewhat willing to properly output text, we had to design a somewhat arbitrary M-Core. Well - whenever we try to not just arbitrarily write one, we start to lose sight again - so, what now? Of course, writing a proper M-Core; all in all that's what we wanted. Once we have done so we should also have a rather easy time doing the rest - so the idea - once it is, however, done properly. From then on it is no longer about fiddling around in the core, unless realistically expanding it ...

----------------------------------------------
this insight did cost 1 pot of weed!

The point is that now we can think in a new direction. Not that it wasn't understood previously - it is however now understood in the middle of the progress, where we are already writing a somewhat proper M-Core. We have thereby gathered concepts and begun to follow them up in code. The hardest part - we might say - is even done!

It begins quite simply - usually. Once writing any program - so, without the intention of going anywhere near Crystals, just to simply write up a working environment of some sort, ncurses in our case (that is a terminal/commandline thing that allows one to place the cursor anywhere on the screen, in relation to a proper resolution, instead of just line-by-line as in the commandline ... - then also colors and such) - one eventually expands into further files. OK - so - what we get is, potentially, a main class wherein we write up the way ncurses is to work for us. That way we gain an understanding, and while we might continue from there, we should rather put that understanding to more effective use.
Eventually I came to use Crystals' datatypes, or, datatypes I learned from writing Crystals, and so it is simply a matter of going ahead and saying #include "crystals.h" to get all that done. So - now, while following along the reasoning that presented itself to me, I found a glitch in it. Why would I try to put it all into crystals.h if the other things, like Textfields and corresponding classes, would maybe better go into something else - like textfield.h? All in all - want textfields, need crystals. Want Crystals - what for?

* now I opt to either choose a cigarette or to smoke some more pot. I realize I have only little left and thus might stretch the consumption by buffering it through cigarettes. I opt to do nothing
# I found that my coffee was empty and hence got myself a new one. Now I find myself asking the same question, and why do I waste so much time writing this? *

Anyway ... we have certainly taken our look at the M-Core previously - so - enough. Hence I always came to a point where I didn't know what to make of it, as too many other things were still on my mind. The problem always used to be the proper implementation as adaptive to whatever framework (i.e. OpenGL vs. DirectX) is being used. Logically that isn't much of a problem to do; and so I see and understand the value of just taking the M-Core as a class and implementing it within whatever program infrastructure it is found to work within. So we say: M_Core.create_memory and M_Core.load_my_process - while all of it is to work for Crystals ... those processes ... so - ... anyway.

====================================
* so I picked a cigarette *
====================================

So we take that class - and why? There may be reasons - like being more platform independent after wiring the M-Core into a dedicated one - so the M-Core primarily needs concise and definite functions to accomplish various things.
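That "M-Core as a class" idea can be sketched minimally. Only create_memory and load_my_process are named above; the Process struct, the return types and the bookkeeping are illustrative assumptions, not the actual design:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a process image to be handed over to the core.
struct Process {
    std::vector<std::uint8_t> image;  // the loaded "crystal" code/data
};

class M_Core {
public:
    // Reserve a block of core memory of the given size (in bytes).
    // Returns an opaque handle (here: simply an index into our blocks).
    std::size_t create_memory(std::size_t bytes) {
        blocks.emplace_back(bytes, 0);
        return blocks.size() - 1;
    }

    // Hand a process over to the core; the core takes ownership.
    void load_my_process(Process p) {
        processes.push_back(std::move(p));
    }

    std::size_t process_count() const { return processes.size(); }

private:
    std::vector<std::vector<std::uint8_t>> blocks;
    std::vector<Process> processes;
};
```

The point of the class shape is exactly the platform independence argued above: whatever infrastructure hosts it only ever talks through these few definite functions.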
For now we put into it just the mandatory, that is input and output, and ... voila ... tadaaaa ... we get 'Qoph' - a compound of classes that support the variety of things we need for certain tasks, which in their place are derived from the Crystals data-structures that make up the base component of the M-Core. That means:

Qoph-input alias Input Adapter
Qoph-output alias Graphics Adapter

The first thing we want our M-Core to do after M_CORE::InitializeStartup is to have all the available screens accessible and ready to use. As we move on we might also want multi-core support ... but that is another thing, lying in the past and yet in the future to conceive. Whatever the case - we so determine the Screen class, one that for now is internal to the M-Core, and as such we can use DirectX or OpenGL or whatever makes us happy for it. We may read M_Core.system.screens to see how many there are - then we say M_Core.graphics.setup_Screen (&Screen_whatever,0), from where on we may want to use the screen class to see what resolutions it is capable of, or things alike.

So - in this idea there are always two screen types, public and private. If it however is just 'screen' we may be able to use it as that wherein we work on the M-Core; but if I have a program that is to run within the M-Core, a linked-in library that may have been compiled using a different M-Core, that doesn't work anymore! Thus all such 'Crystals' need to be compiled under the same conditions, that is, following a certain architecture like 32 or 64 bit of course - which is the base of the crystal/m_core interface; so, something that should push us to think more specifically about where to use byte and where to use long aligned datatypes. What we/I have worked out so far is a general understanding of how those datatypes are to come together, how to communicate between private and public 'frames', but a little bit is still missing - like - a properly erected M-Core.
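The startup-to-screen flow above can be sketched. Only InitializeStartup, system.screens and setup_Screen are taken from the text; Screen's members, SystemInfo, Graphics and the single detected 80x25 terminal screen are assumptions made so the sketch is self-contained:

```cpp
#include <cassert>
#include <vector>

struct Screen {
    int width = 0, height = 0;
    bool in_use = false;
};

class M_CORE {
public:
    struct SystemInfo { int screens = 0; } system;

    struct Graphics {
        std::vector<Screen>* pool = nullptr;
        // Bind a caller-owned Screen object to physical screen `index`.
        bool setup_Screen(Screen* s, int index) {
            if (!pool || index < 0 || index >= (int)pool->size()) return false;
            *s = (*pool)[index];    // copy out the detected mode
            s->in_use = true;
            return true;
        }
    } graphics;

    void InitializeStartup() {
        // Pretend we detected one 80x25 screen (e.g. the terminal).
        detected = { Screen{80, 25, false} };
        system.screens = (int)detected.size();
        graphics.pool = &detected;
    }

private:
    std::vector<Screen> detected;
};
```

Usage then reads exactly as described: read system.screens, then setup_Screen(&Screen_whatever, 0) and inspect the Screen for its capabilities.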
At this point the idea is that we have 'Crystals Main' as the class that manages anything related to Crystals, while thereby also providing the META class as access to that, which then uses the M-Core. The Meta is thereby to be provided with various things - and now of course it looks as though this were not necessary.

----------------------------------------------
this insight did cost a bunch of time!

Still, the M-Core exists in its design so that the Meta, or whatever, is finally capable of handing itself over to M, wherein the rest is then taken care of. That is easy when simply writing a game or something, because M is thereby not important, since it is the t-Core that does the actual 'crystal execution' ... but certainly another complicated thing on its own. While the M-Core is like a box wherein things may happen, it is also the outer hull to the VR that happens - and hence it will again need something surrounding it - so the philosophical understanding.

Now - how can weed help me here? I can take all this and start producing a concept while so doing the necessary things regarding the M-Core. I mean - if I wanted to say "what if I didn't smoke pot now" - I would have to not smoke pot and thus see that I would look, at first directive, into working from what I have worked out here so far. I would take what I have and make of it what I am capable of. Thus I have to specifically not do so - just to then find out that I still might, if I smoked some weed. Fact is that when I smoke weed I at least expect the one insight that I otherwise wouldn't have had - kind of.

* smoking pot *

| | | :>

And that's how it ended. We still have unfinished business to do. For instance, there is the primary issue with those frames. A general class allows us to describe portions of the screen at any level - so, offset and size. In the end any process will acquire a frame like that, so, as a general window into the M-Frame so to speak. But my mind keeps drifting away.
So - I'm about to calm down again already - somehow - it's hard to tell, really, when being high for a long period of time, like after multiple days. So, there is addiction. Like where I earlier had to think about which way - I smoked a cigarette, possibly after enough time ... so ... high, low, high, low isn't as good and well as ... 'constant ascension' ... which is one reason - next to possibly just being addicted - to just put in all the remaining weed, at least of one bag, to get going. That's not much - three pots maybe per bag.

So - when seeing how I previously reflected upon those inner things, the details, the point is that each element ties into a system that again is a component of a system, whereby there need to be straight lines for once, to keep a certain order, but beyond that things ... shouldn't be restrictive. Once sober, the mind is of course capable of working with the worked-out facts and figures - from the perspective of the point of their conception, however, these figures are 'numb' - that becomes part of experience once working on them while running low. The mind then has to approach the matter differently - so, with more cognitive effort. Eventually, so it might become clear, this effort however runs into problems once there are issues that need fixing but are somewhere a part of this system that the mind is no longer entirely in touch with. But being high ... the amount of dope determines a time-window, the dosage ... to some part as well. So I question whether I should end it, keeping the rest for some dire days, or smoke the rest right now.

*#1 Firstly, a Frame like that does maybe specify a region, but how that region is accessed now depends on something else. A primary difference would be between textmode and pixelmode, or then again projected 3D or otherwise. So it is evident that right along with the Frame we need Adapters to work with them.
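The Frame idea - a region given by offset and size, plus a tag for how that region is accessed - can be sketched like this. Only "offset and size" and the textmode/pixelmode/3D distinction come from the text; the names and the sub-frame helper are assumptions:

```cpp
#include <cassert>

enum class FrameMode { Text, Pixel, Projected3D };

struct Frame {
    int x = 0, y = 0;          // offset within the parent frame/screen
    int width = 0, height = 0; // size of the region
    FrameMode mode = FrameMode::Text;

    // Carve a sub-frame out of this one, clamped to our own bounds -
    // a 'window' into a larger frame rather than dedicated memory.
    Frame sub(int ox, int oy, int w, int h) const {
        Frame f = *this;
        f.x = x + ox;
        f.y = y + oy;
        f.width  = (ox + w > width)  ? width  - ox : w;
        f.height = (oy + h > height) ? height - oy : h;
        return f;
    }
};
```

A process acquiring its "general window into the M-Frame" would then just be handed such a sub-frame of the screen frame.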
An Adapter firstly adapts to a frame, so our DirectX Adapter wouldn't need anything else, basically, but to finally conform to the Frame standard. But this has so far been the problem. While producing the Adapters, or what were earlier called Mainframes, it has always become a little bit intransparent as to what goes where and how. Thus it was my intention here to more objectively analyze how to avoid any confusion about the M-Core whatsoever - while reflecting upon it from the perspective of producing the desired Terminal, to which there is the Nexus, which is an integrated part of the M-Core operating system.

One thing that so finally happens to be conclusive is that for our production purposes we should primarily focus on separating the Adapter from 'the System', instead of 'the System' from 'the System/the Architecture'; so that we work on the M-Core as a hull to the System wherein the Adapter resides, instead of "amalgamating" the Adapter into it as though it were a fixed (duh) component thereof. If we so take DirectX, or any other thing, we find that it has one thing in common at the root, being, that there is a centralized system from which our Adapter class is then derived. So, as the Adapter focuses on Frames, we come to have DirectX surfaces that need to somehow be created and all that. We thereby don't necessarily want to create surfaces for each and every little thing - but all that connects the Adapter to the process so far is the Frame. Or so: we only have the Frame ... we have a "bunch of" Adapter-Mass, and therefrom the first thing we create is the Frame. To not get anything confused we also let the screen be such a Frame; or otherwise, fullscreen is a fullscreen Frame into the full-screen adapter thing. Whereby we can understand Frames in two ways: as dedicated memory, or as a 'window' into a larger one. Thus what we know from Direct3D or OpenGL as viewports is simply a Frame that draws to the screen.
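That separation - a common Adapter root from which each backend derives, connected to the process only through Frames - can be sketched as follows. All names here are illustrative assumptions; the text only fixes the idea that the screen itself is just another Frame handed out by the Adapter:

```cpp
#include <cassert>
#include <memory>
#include <string>

struct Frame {
    int x = 0, y = 0, width = 0, height = 0;
};

// The centralized root: every backend (ncurses, SDL, DirectX, ...)
// derives from this and is only ever asked for Frames.
class Adapter {
public:
    virtual ~Adapter() = default;
    // The full screen is itself just a Frame into this Adapter.
    virtual Frame fullscreen_frame() const = 0;
    virtual std::string name() const = 0;
};

// A text-mode adapter in the spirit of the ncurses "Mainframe".
class TextAdapter : public Adapter {
public:
    TextAdapter(int cols, int rows) : cols(cols), rows(rows) {}
    Frame fullscreen_frame() const override { return {0, 0, cols, rows}; }
    std::string name() const override { return "text"; }
private:
    int cols, rows;
};
```

A DirectX adapter would slot in the same way, managing its surfaces internally instead of per little thing, and exposing them only as Frames.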
We can have "surfaces" as screen-surfaces or surfaces as buffered memory. The idea is that screen-memory is tied to the immediate output controller on the hardware, whereby the accessed memory field is updated, whereas buffered memory is all memory attached to that system as an integral part thereof. That system furthermore is central to all Graphics Adapters; thus we may call it the Graphics Core - and how does that now relate to Qoph-output?

*#2 We can say that we won't use OpenGL and DirectX next to one another, or also that for the start we may be best off with SDL. When however combining ncurses and SDL - for whatever nonsensical purpose - we again come to think of the word 'Mainframe'. When we speak of putting graphics to use, we might begin by speaking of the hardware as before, the number of screens, but actually we also speak of finally acquiring access to a screen, because in whatever way we do, we wish to output anything onto anywhere - and we cannot do so without that required access. So, on initializing ncurses we get that right away in the form of stdscr. That this screen now opens up within a terminal that may either be fullscreen or not doesn't matter - here the ncurses Mainframe determines that this is all there is.

When starting SDL however we are again confronted with choices. What size, color coding, fullscreen or window? And if window or fullscreen, we may also want to change that. So the expanded adaptation generates a new kind of adapter layout - while in adaptation to ncurses we formed our textframe, we in adaptation to SDL/DirectX form the for once generally required RGB and RGBA frames, but now also something else ... which is well, but we need to understand how that is to take shape. First, to each Frame there is a Mainframe. Each Mainframe is available in the Graphics Core - good for now.

* cigarette *

The question though is, what to do with the Mainframe? What does DirectX_Mainframe.Start () do for us?
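The contrast between the two startup paths can be made concrete: ncurses hands us stdscr with no questions asked, while SDL wants a mode stated up front and lets it change later. The ModeRequest struct and the two Mainframe shapes below are illustrative assumptions, not the actual Qoph classes:

```cpp
#include <cassert>
#include <optional>
#include <string>

struct ModeRequest {
    int width = 0, height = 0;   // size
    int bits_per_pixel = 32;     // color coding
    bool fullscreen = false;     // window or fullscreen
};

// ncurses-style: the environment dictates everything; no request needed.
// (In real ncurses, initscr() yields stdscr - the one screen we get.)
struct TextMainframe {
    static std::string start() { return "stdscr"; }
};

// SDL/DirectX-style: we must state what we want, and may change it later.
struct PixelMainframe {
    std::optional<ModeRequest> mode;
    bool start(ModeRequest m) {
        if (m.width <= 0 || m.height <= 0) return false;
        mode = m;
        return true;
    }
};
```

The "new kind of adapter layout" is then simply that the pixel Mainframe carries this changeable mode state, where the text Mainframe has none.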
Well, the M-Core is composed of elements chosen as part of the 'Meta-Setup', that is, elements to which the corresponding 'Meta-Code' exists. So if my Meta only holds textmode, I only need ncurses. Otherwise, more, or something else. Thus I can also tell within the Meta that there is this - I don't need to give it an ambiguous name. Thus within the Meta I understand what there is, and | loop closed. The Meta now is capable of deriving Frames therefrom, and according to Terminal awareness there is finally direct output and indirect output.

Understanding how the Mainframe works internally, it can be used to accomplish certain tasks. Primarily we would for instance create Frames to compose smaller Frames, thus allowing us to draw a 'Window Outline' around them, within the given space - if not generally just drawing them around them.

* cigarette-done *

Within ncurses we can always draw from the screen; that means every given frame only needs to be given offset and size and we can properly relate to it. Within DX or SDL we however know a variety of modes or requirements.

#*3 ... damnit, that little bit I got left ... I might as well ... #*3

So, now it is up to our Meta to be given time to execute - and that "Frame" of time or execution is now what we primarily look at. This is where it all happens. We thereby feature, in the idea - while practically also trying to adapt to it more efficiently by using functions - a loop with multiple stages. In the center of it all there is crystal-mode execution. This mode is exclusively featured as T_MODE, wherein the final scruffs are removed that would somehow impair the performance of the loop. In terms of a desktop, the outer loop redraws the entire screen if necessary and performs a minor amount of operative tasks. Thereafter the foreground process, first of all, is given a frame to screen. So we have that term - 'SCREEN_FRAME'.
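The staged loop described above can be sketched: an outer stage that redraws the screen if necessary and does light operative work, then the inner T_MODE stage where crystal execution runs. Everything except the name T_MODE and the redraw-if-necessary behavior is an assumption:

```cpp
#include <cassert>
#include <functional>

struct Loop {
    bool screen_dirty = true;
    int redraws = 0, t_mode_ticks = 0;

    // One pass of the outer loop.
    void tick(const std::function<void()>& crystal_step) {
        if (screen_dirty) {        // redraw the entire screen if necessary
            ++redraws;
            screen_dirty = false;
        }
        // ... minor operative tasks would go here ...

        // T_MODE: the stripped-down center of the loop,
        // exclusively running crystal-mode execution.
        crystal_step();
        ++t_mode_ticks;
    }
};
```

The SCREEN_FRAME handed to the foreground process would be acquired in the outer stage, before T_MODE is entered.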
This may then however already be crystal-mode execution - while, for what does, may or may not happen in its order, the Meta design is important. Once given the time to execute, what should it do? What does it need to do?

===# END of DOCUMENT