Friday, May 15, 2009

Is .NET (dot NET) the future of 3D applications and development?

What is .NET (dot NET)?


To explain in detail what Microsoft's dot NET is would go beyond this article and the spirit in which it was written. That said, the dot NET architecture and application framework is clearly the future of application and systems development on the Microsoft Windows platform, and very likely not exclusive to the Windows platform either. The word "NET" suggests network or internet related technology, but while that is part of the architecture and framework, it is only a very small part of it. In a nutshell: dot NET is the future.

What about Linux and .NET?


The dot NET architecture and framework is by no means a domain exclusive to Microsoft. Project Mono, initiated by the well-known Linux software company Ximian, has been working on an open-source implementation of the dot NET framework that already includes the CLR (Common Language Runtime), a C# (C-Sharp) compiler, and many compatible class libraries. This, combined with Microsoft's efforts in submitting C# and the CLI to the standards committee, is certainly reason to sit up and take notice of what the future will bring. Cross-platform development can be as painful and tedious an experience as it can be a smooth and efficient one, depending on the discipline of the teams working on a product. The prospects and current state of the efforts mentioned above will give unstructured development processes a smoother ride, and offer an even bigger benefit to those who have already invested heavily in multi-platform development pipelines within their respective companies.

FOR DEVELOPERS AND POWER USERS


While I would love to go in depth on all the aspects and benefits of the dot NET framework, there simply wouldn't be enough time, or web space, to do it. I don't mind writing the occasional article or giving a lecture, but writing an entire library would be a little overkill. Instead I will touch on the aspects that I consider the most important improvements, yielding the biggest benefits within the scope of this article.

Language agnostic development & scriptability


In other words: being able to extend software by writing scripts that are not bound to a single, proprietary language. Many 3D and DCC software products have had scripting languages for a good many years now, and as a developer or power user you will have discovered that each of these is incompatible with well-known languages or differs from standards and norms through highly proprietary syntax and structure. The dot NET architecture easily allows for a scripting interface that is entirely language agnostic, where the only prerequisite is that the language supports the CLI. Currently those languages include C++ (in its managed form), C#, Visual Basic, J#, and many others, including a funky flavor of COBOL. The choice of scripting language should be granted to the user, not dictated by the software developer.

Easier said than done, but with the dot NET architecture this becomes easier to do than to discuss. Being able to script in a language that is both a standard and supported by a wealth of resources worldwide opens up a whole new dimension to what is possible when extending a software product through simple means. The possibility of having scripts and components interact and integrate seamlessly, along with the enabling technologies the framework carries, would most certainly accelerate the development of solutions rather than exponentially increase the effort required to create one.
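To make this a little more concrete, here is a minimal sketch of how a host application could compile a user-supplied C# script at run time through the framework's own compiler services. The IHostScript contract and the hosting code are hypothetical illustrations, not the API of any existing product; swapping the CSharpCodeProvider for another CodeDomProvider (a Visual Basic one, for instance) is what would make the scripting interface language agnostic.

using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

// Hypothetical contract the host expects every script to implement.
public interface IHostScript
{
    void Run();
}

public static class ScriptHost
{
    public static IHostScript Compile(string source)
    {
        CodeDomProvider provider = new CSharpCodeProvider();
        CompilerParameters options = new CompilerParameters();
        options.GenerateInMemory = true;                       // compile straight to an in-memory assembly
        options.ReferencedAssemblies.Add("System.dll");
        options.ReferencedAssemblies.Add(typeof(IHostScript).Assembly.Location);

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Script failed to compile.");

        // Instantiate the first type that implements the host's script contract.
        foreach (Type type in results.CompiledAssembly.GetTypes())
            if (typeof(IHostScript).IsAssignableFrom(type) && !type.IsAbstract)
                return (IHostScript)Activator.CreateInstance(type);

        throw new InvalidOperationException("No IHostScript implementation found.");
    }
}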

Scripts or plugins, little difference


The compiler that generates the actual code that runs is itself part of the dot NET framework and can be called upon by a software product. Scripts written by users can therefore be compiled into code that executes at a much faster pace than any of the scripting languages in today's products. Similarly, extending a software product in a language-agnostic way means that scripts and plugins can blend together in the final result, regardless of the origins of the different components. The functional difference fades away while the practical drawbacks simply cease to exist, allowing advanced TDs and power users to write scripts and plugins that no longer differ from one another in any significant way.
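The sketch below shows the plugin side of the same coin, assuming a hypothetical IHostCommand contract and a plugin file called MyPlugin.dll: a pre-built DLL is loaded from disk and driven through exactly the kind of interface a compiled script would implement, so the host code path is identical for both.

using System;
using System.Reflection;

// The same kind of contract a compiled script would implement (see the previous sketch).
public interface IHostCommand
{
    void Execute();
}

public static class ExtensionLoader
{
    // Whether the assembly was built ahead of time as a plugin DLL or produced a
    // moment ago by the in-memory script compiler, the host walks it the same way.
    public static IHostCommand FindCommand(Assembly assembly)
    {
        foreach (Type type in assembly.GetTypes())
            if (typeof(IHostCommand).IsAssignableFrom(type) && !type.IsAbstract)
                return (IHostCommand)Activator.CreateInstance(type);
        throw new InvalidOperationException("No IHostCommand implementation found.");
    }

    public static void Main()
    {
        // Hypothetical plugin shipped as a DLL next to the host application.
        IHostCommand plugin = FindCommand(Assembly.LoadFrom("MyPlugin.dll"));
        plugin.Execute();   // runs as ordinary JIT-compiled code, exactly like a compiled script would
    }
}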

JIT, Just In Time, and it could not have come sooner


The JIT (Just In Time) compiler that is part of the dot NET framework and CLR provides compilation on the fly. Code compiled into IL (Intermediate Language, the sister of, or replacement for, assembly code) is translated into native code just before it is needed and executed at the best possible speed of the machine and OS it is running on. Instead of having compiler optimization fixed at build time for a final product or solution, the JIT compiler determines the appropriate optimizations at run time and can therefore tailor execution to each individual machine and configuration.

Performance


While on the surface the dot NET architecture and framework appear to be a resource hog, the practical results paint a very different picture. We have done extensive testing in our lab and have been architecting solutions based on the framework for some time now, and most, if not all, of the negative impact is easily negated by a plethora of other advantages. While it would take a trilogy to describe all the advantages in detail, the combined results are very positive in the areas of code execution speed, quick-to-market solutions, ease of architecting solutions, better control over available resources and engineering time, the ability to replace components and improve continually in a consistent way, and, most of all, the ability to advance the development of critical components into the next generation of software.

Delegate to relieve yourself of callbacks


The way the new framework, and in particular the C# language, handles delegates and events shows that a lot of developer frustration has been taken into account and done away with. The fluent means of adding and stacking callbacks allows for the simple and effective creation of event handling systems, which serve their purpose well in software with demanding and complex user interface, interoperation, and integration models. Doing away with the crude and error-prone callback mechanisms of current C++ code is already proving to be more than a fix to a problem. The concept of delegates opens up a new world of possibilities that translates directly into engineering concepts for interoperability between the components of a software product and third-party extensions.
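As a small, hypothetical illustration (the SceneGraph class and its members are made up for the occasion), this is roughly what the delegate and event model looks like in practice:

using System;

public class NodeChangedEventArgs : EventArgs
{
    public string NodeName;
}

public class SceneGraph
{
    // A single event replaces a hand-rolled callback registration table.
    public event EventHandler<NodeChangedEventArgs> NodeChanged;

    public void RenameNode(string oldName, string newName)
    {
        // ... perform the rename ...
        if (NodeChanged != null)    // raise the event for every subscriber
            NodeChanged(this, new NodeChangedEventArgs { NodeName = newName });
    }
}

public static class DelegateExample
{
    public static void Main()
    {
        SceneGraph scene = new SceneGraph();

        // Any number of handlers can be stacked or removed with += and -=,
        // from the host, a plugin, or a user script alike.
        scene.NodeChanged += delegate(object sender, NodeChangedEventArgs e)
        {
            Console.WriteLine("UI refresh for node " + e.NodeName);
        };

        scene.RenameNode("pCube1", "hero_mesh");
    }
}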

Structured exception handling


Error handling, both inside a dot NET based software product and in third-party add-ons, happens in a far more structured manner in the shape of exception handling. Being able to write more robust code without having to anticipate every possible scenario that can cause errors helps a great deal in delivering a solution on time and with higher stability than before. Our deeper exploration and tests have shown that structured exception handling, which resembles the traditional try/catch structure, has no measurable negative impact on the execution speed of an application. The only time a performance hit can be detected is when an exception actually occurs and gets handled. Then again, this is not an issue, because the exception interrupts the normal execution of the application for a good reason; chances are that a traditionally architected application would simply have crashed.
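A short sketch of the structured try/catch/finally pattern, with the file path and scene data standing in as placeholders:

using System;
using System.IO;

public static class SceneSaver
{
    public static bool TrySave(string path, byte[] sceneData)
    {
        FileStream stream = null;
        try
        {
            stream = new FileStream(path, FileMode.Create);
            stream.Write(sceneData, 0, sceneData.Length);
            return true;
        }
        catch (IOException ex)
        {
            // The failure is reported in a structured way instead of crashing the host.
            Console.Error.WriteLine("Could not save scene: " + ex.Message);
            return false;
        }
        finally
        {
            // The finally block runs whether or not an exception occurred,
            // so the file handle is never leaked.
            if (stream != null)
                stream.Close();
        }
    }
}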

Garbage collection


Garbage collection in the dot NET architecture has proven to be robust, effective, and simple. As the basis for our tests we used a well-engineered project containing an awful lot of manual memory management, and after replacing that manual memory management with the fully automated paradigm it was immediately noticeable that the application executed slightly faster. In addition to preventing memory leaks, the intelligence of the dot NET GC determines when memory is released back into the memory pool. You can still take control over this, but it would take an outrageously complex situation before you would want to.

Most automated garbage collection systems have lacked either the performance or the intelligence of the dot NET GC, so there is valid reason to be satisfied with the one the dot NET architecture provides. In general terms, I have seen worse memory management done manually. Memory leaks have always been a major source of frustration to both developers and users, and putting that behind us is a step in the right direction. An additional benefit is that proper memory management now requires minimal effort, which leads to fewer lines of code and subsequently fewer bugs.
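A minimal sketch of what this means in practice; the MeshBuffer type is a made-up example, and forcing a collection as done at the end is possible but rarely justified:

using System;

public class MeshBuffer
{
    public readonly float[] Vertices;
    public MeshBuffer(int count) { Vertices = new float[count * 3]; }
}

public static class GcExample
{
    public static void Main()
    {
        long total = 0;
        for (int i = 0; i < 10000; i++)
        {
            MeshBuffer temp = new MeshBuffer(1024);   // allocated, used, then simply abandoned
            total += temp.Vertices.Length;            // no matching delete/free anywhere
        }

        GC.Collect();                                  // explicit collection, shown only for illustration
        Console.WriteLine(total + " floats allocated; managed heap now " +
                          GC.GetTotalMemory(true) + " bytes");
    }
}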

Inheritance up the wazoo


Inheritance is certainly one of the more powerful features of the dot NET Framework. The ability to subclass objects and override functionality with ease, as opposed to the long-term and complex planning and coding practices required in the past, allows for the development of highly modular systems that are adaptable not only at compile time but also at run time. Allowing other developers and users to selectively replace components yields the same benefit outside a structured development team as within it. Exposing functionality, features, and even the core without being forced to open up source access should pave the way for a future of commercial software development in which extensibility and project manageability play a key role. Whether you want users and developers to completely replace the user interface or grant them access to the detailed workings of core components should no longer matter with well-designed dot NET applications.
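A brief, hypothetical example of that replaceability (the Exporter class is not from any published SDK):

using System;

public class Exporter
{
    // The base class does the generic work and marks the interesting step virtual.
    public virtual void WriteHeader() { Console.WriteLine("generic header"); }

    public void Export()
    {
        WriteHeader();          // the override, if any, is picked at run time
        Console.WriteLine("exporting geometry...");
    }
}

public class StudioExporter : Exporter
{
    // A third party (or an in-house TD) replaces just this one step.
    public override void WriteHeader() { Console.WriteLine("studio-specific header"); }
}

public static class OverrideDemo
{
    public static void Main()
    {
        Exporter exporter = new StudioExporter();   // swapped in without touching the host code
        exporter.Export();
    }
}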

Multi-threading


Anyone who did any multi-threaded programming before the dot NET architecture will know that issues of synchronization and thread safety are incredibly complex to handle and manage. On top of the conceptual and abstract design of multi-threaded software, there is a huge amount of additional code associated with the implementation. The dot NET framework supports multi-threading with a number of well-engineered management methodologies that in most cases reduce proper multi-threading to just a few lines of code. As with the garbage collector, this results in fewer lines of code and thus potentially fewer bugs.
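A rough sketch of the "few lines of code" claim, using the thread pool and a lock for synchronization; the work items themselves are placeholders:

using System;
using System.Threading;

public static class ThreadingExample
{
    private static readonly object sync = new object();
    private static int completed;

    public static void Main()
    {
        const int jobs = 8;
        ManualResetEvent done = new ManualResetEvent(false);

        for (int i = 0; i < jobs; i++)
        {
            int jobIndex = i;                          // capture the loop variable safely
            ThreadPool.QueueUserWorkItem(delegate
            {
                // ... do one slice of work here, e.g. filter one tile of an image ...
                lock (sync)                            // thread-safe update of shared state
                {
                    Console.WriteLine("job " + jobIndex + " finished");
                    completed++;
                    if (completed == jobs)
                        done.Set();                    // signal the main thread after the last job
                }
            });
        }

        done.WaitOne();                                // block until every queued job has completed
        Console.WriteLine("all jobs finished");
    }
}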

FOR MANAGERS


If you have been managing teams of engineers for a number of years, with development taking place in structured languages such as C++, you will no doubt have noticed that development by large groups of engineers, split into smaller teams collaborating on a single complex product, is often not as structured as it should, or could, be. Implementing organizational changes to facilitate better development processes, with the goal of writing more robust and structured code, can be a lengthy undertaking that still does not guarantee the intended goals will be met.

The re-introduction of structured programming


The architectural aspects of dot NET have (re-)introduced the concepts of structured programming, concepts that had been lost in the sea of API and OS madness plaguing the software industry in recent years. With Microsoft Corp. structured as many small companies working toward a shared goal, consistency across technologies and APIs was evidently lost, forcing many to resort to quick-and-dirty hack jobs in order to get code done within reasonable time limits. OLE, Interop, ActiveX, you name it, but can you name what is consistent across all of those architectural pieces? With even Microsoft unable to do that, I doubt anyone else could.

Component technology with few dependencies


The dot NET architecture provides old-school structured programming disciplines along with modern, future-oriented, and scalable concepts. It offers the ability to keep engineers within a structured programming paradigm while simultaneously allowing each individual engineer or team more freedom in their implementation efforts. This does not relieve management of its responsibility to guide and structure the development process, but in general it makes the work a lot easier and more efficient over the intermediate to long run. Being able to identify and componentize the different parts of a software product brings a higher degree of reusability, as well as a smooth path to retire and replace components without affecting a dozen different dependencies at the same time.

Better reusability and exchangeability in turn result in a more cost-effective way of developing software, which, as we all know, is already an expensive enough resource for any technology company. Underlying architectural changes in the OS or API will have much less effect on application development based on the dot NET framework, just as architectural changes in your application will have very little negative impact on third-party components, regardless of whether those components come from outside the company or from other teams inside it.

Versioning


The version control and manageability aspects of the dot NET Framework should alleviate the tedious parts of release management that have so often, in the past, resulted in incompatibilities between libraries or distributed files. While it goes beyond the scope of this article to outline the versioning controls in full, it is safe to conclude that versioning has reached the point where safe management of releases should no longer be a time-consuming factor.
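As a small illustration, assembly-level version attributes, which normally live in a project's AssemblyInfo.cs, are what the loader uses to keep releases apart; the numbers below are placeholders:

using System;
using System.Reflection;

[assembly: AssemblyVersion("2.1.0.0")]       // the version the runtime binds against
[assembly: AssemblyFileVersion("2.1.0.417")] // the version shown in the file's properties

public static class VersionInfo
{
    public static void Main()
    {
        // A host or installer can read the version back at run time.
        Version v = Assembly.GetExecutingAssembly().GetName().Version;
        Console.WriteLine("Running component version " + v);
    }
}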

Testing and QA


With structured exception handling, run-time attributes, and extensive debug and tracing features, the day-to-day activities of testing and QA should see an increase in productivity and in the ability to pinpoint problem areas. With more detailed information coming from the code and the software itself at run time, the analysis and subsequent fixing of bugs can be expected to come at a lower cost and a faster pace. Combining run-time attributes with automated test cases should relieve QA of the more tedious and repetitive work of hunting down regression bugs.
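A short, hypothetical sketch of the debug and tracing facilities mentioned above; the log file name and messages are illustrative only:

using System.Diagnostics;

public static class DiagnosticsExample
{
    public static void ProcessFrame(int frameNumber)
    {
        // Assertions are active in debug builds and compiled away in release builds.
        Debug.Assert(frameNumber >= 0, "Frame numbers must not be negative");

        Trace.WriteLine("Processing frame " + frameNumber, "Renderer");
        // ... actual work ...
    }

    public static void Main()
    {
        // Route trace output to a log file that QA can inspect after a run.
        Trace.Listeners.Add(new TextWriterTraceListener("render.log"));
        Trace.AutoFlush = true;

        ProcessFrame(42);
    }
}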

SDKs and Extensibility


Applications based on the dot NET architecture require less effort to expose their internal functionality, resulting in an SDK that stays more current with the application and comes far closer to the ideal of an open architecture. While documentation efforts are still required, the often separate coding effort that goes into developing an SDK should no longer be a drain on resources.

FOR USERS


So what's in it for the user; what benefits would a user gain from software that fully exploits the functionality provided by the dot NET architecture? The only possible answer is: a lot. I'll quickly go through a number of areas as they might apply to DCC applications. Please note that the following descriptions of potential benefits are not a reflection of what might be possible under ideal circumstances, but practical, real-world descriptions of current options and capabilities. I will not get into specific features or feature sets, because those would be too detailed to fully describe and are only of secondary concern compared with the fundamentals they could be built on.

Integration


Tight, yet very flexible, integration between applications. Currently the integration and interoperability of software products depend on a lot of coding and coordination effort on the part of the software vendor, while third-party applications and plug-ins are often not as integrated as users would like them to be. Integration is easy with dot NET based applications; if anything, a software vendor would have to put in an active effort to avoid providing integration features. The ability to connect (dot NET based) painting tools directly to a 3D modeling system, while having both directly integrated into a compositing system, is just one of the simple possibilities that come to mind. Or how about dropping a 3D scene into your compositor and applying changes to the scene from within the compositor that could otherwise only be done in the 3D modeler (i.e. applications that operate on each other's data can technically also share functionality)? Of course the best part is that, unless a software vendor actively prevents integration, the different applications that form the integrated toolkit do not have to come from the same vendor; software from different vendors can easily be developed to "play nice". In a small market where loyalties are divided and companies have been trying to position themselves as complete, total solutions for your production pipeline, things have not always worked as seamlessly as they should have. Partly these may be problems of commercial direction, but more often than not the coding and coordination efforts between products have resulted in less than seamless integration.

Robust software


Because of the advantages dot NET offers the software vendor, smoothing and focusing development effort where it is most needed (i.e. functionality, performance, and stability), it has become easier to develop robust software, which in turn is a great advantage to the user. Any time and effort saved on the part of the software developer, while providing better bug tracking and fixing procedures, can be put to use where the market and its users want it. Automated garbage collection already prevents memory leaks, while structured exception handling gives developers the means to secure data when a serious bug does occur.

Extensible


Extending applications by means of scripts, plug-ins, or add-ons is nothing new these days, and while integration often leaves a lot to be desired, the approach of dot NET based applications would radically alter the way software is, and can be, extended. One major advantage is that, with a solid application design, there need be no practical difference between extensions written in a scripting language and modular, componentized plugins built against an SDK. While fundamental differences between the two remain, the practical aspects are virtually transparent to the user.

Most extensions to DCC applications take the form of plugins: small programs that run on top of, or inside of, the host application in order to gain access to the data and functions it provides. External applications meant to extend functionality often rely on importing and exporting data files between applications, or require specific COM/DCOM capabilities (if the host application provides any at all). All of these external tools require extra effort from the vendors involved and are hard to keep in sync when different tools move to new releases that change the dependencies other tools rely on. With dot NET based applications such situations are no longer a serious concern. External applications can integrate seamlessly with a host application (though at that stage it is more a component than a host). This would eliminate a lot of duplicated effort on all sides, effort that can then be used to get a tool to market quicker or to fix problems faster. Because the component design concept is hierarchical, it is entirely possible to have yet other tools take advantage of third-party components: add-ons for plug-ins, in layman's terms.
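A rough sketch of that "add-ons for plug-ins" idea, with all interfaces and names invented for illustration: a plugin can itself act as a small host and expose its own extension point.

using System;
using System.Collections.Generic;

// An extension point exposed by a plugin, not by the host application itself.
public interface IBrushFilter
{
    float Apply(float pressure);
}

public class PaintPlugin
{
    private readonly List<IBrushFilter> filters = new List<IBrushFilter>();

    // Third parties extend the plugin exactly the way the plugin extends the host.
    public void RegisterFilter(IBrushFilter filter) { filters.Add(filter); }

    public float Stroke(float pressure)
    {
        foreach (IBrushFilter filter in filters)
            pressure = filter.Apply(pressure);
        return pressure;
    }
}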

Because of the structural aspects of the dot NET Framework and the applications based on it, users can be given access to change and override some or all of the functionality of the application. For the demanding proprietary development efforts that take place at many companies, it will be hugely beneficial to be able to replace functionality by overriding it with custom, proprietary components.

Performance gains


Performance gains are currently hard to measure, because dot NET based applications can be expected to gain in many areas while losing in others. From an overall perspective the gains do appear to outweigh the potential negative performance impact in certain areas. While usability, stability, and enhancements progress at a steady but rapid pace, performance bottlenecks can be analyzed in detail and improved upon or replaced at any point in time. The current trend in the industry seems to favor stable but unoptimized software now, with optimization later, over buggy, unstable, optimized tools. Depending on a software vendor's approach, both options are possible.

Quick to market


Because of the development advantages to the vendor, as well as to third-party developers and advanced users, software can be brought to market quicker: more robust software reaching the market faster to satisfy demand, while spending less money and effort. It is almost the dream of every software vendor, I would hope. Whether the quick-to-market advantages and shorter development cycles will also bring lower prices to end users is impossible to predict; nevertheless, the option to do so is a fundamental aspect of dot NET based applications, and some vendors might prefer to use the extra time to improve the product further instead of shipping sooner. Beyond the benefits to both customers and vendors, there are additional benefits in providing quick and safe fixes to problems that reveal themselves after the software has been released. Given the extensive and solid support for versioning, it would be safe for a vendor to rapidly release fixes that do not impact dependencies such as other components relying on them, provided the vendor decides to support such an open architecture.

Conclusion and personal views


My personal view, after having dealt with Microsoft Windows in every flavor and incarnation since its inception, is that Microsoft truly has been working hard at getting its act together by offering software developers a solid blend of old-school programming paradigms and methodologies within a modern, forward-thinking framework. Seeing the Linux community pick up this quickly on something that originated at Microsoft strengthens the belief that dot NET, as an architecture, has a very strong future ahead of it, and all of us along with it. Without a doubt, DCC applications can benefit enormously when software companies transition into this architecture and style of development. My appreciation of the dot NET architecture is also fueled by the fact that its concepts are very closely related to the style of development and solution architecting at mantiCORE Labs. You could say that I am biased because of it.

Previous software development methods, old-fashioned C++ coding, and improper code hacking will remain with us for a long time to come and will not vanish overnight. However that may be, the fact remains that there now is a choice and a transition path into the future that does not endanger the continuity of new software products the way it has in the past. The choice is there to make, and just how many will have the vision to take the next bold step and embrace it remains to be seen.