Just found at fefe’s blog: http://support.apple.com/kb/HT4077
Security holes galore, and some really strange ones at that. I mean, arbitrary code execution through spell checking? Come on. I think we can conclude that Mac OS is not more secure than Windows, it is just not the main target of attacks.
One big thing I recently found out (I have to admit my AS3 experience is limited to helping others debug some nasty problems with it, otherwise I don't touch it with a ten-foot pole) is that the Flash Player 10 garbage collector is like a government employee: lazy to the max.
Normally a garbage collector keeps a list of "root" objects and maintains a graph of the objects reachable from them, in order to figure out which memory can be reclaimed once it is no longer accessible to user code. It turns out that when the Flash garbage collector sees a particularly large subgraph of objects that would take some time to actually analyze, it just says "oh well, that's a lot of work, I am not in the mood, I'll just say it's still in use, nobody will know". That's right: the Flash garbage collector stops analyzing larger graphs and simply keeps them around, giving you huge memory leaks for free! Now that's a feature.
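For reference, the textbook mark phase described above can be sketched in a few lines: start from the roots, walk the whole reachability graph, mark everything you can reach, and reclaim the rest. This is a minimal illustrative sketch (the class names are mine, not Flash internals), showing exactly the traversal the Flash collector apparently gives up on for large subgraphs:

```java
import java.util.*;

// Minimal sketch of a tracing collector's mark phase: everything reachable
// from a root gets marked; anything left unmarked afterwards is garbage.
// Illustrative only -- not the actual Flash Player implementation.
class MarkSweepSketch {
    static class Obj {
        final List<Obj> refs = new ArrayList<>();
        boolean marked = false;
    }

    static void mark(Collection<Obj> roots) {
        Deque<Obj> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {
            Obj o = stack.pop();
            if (o.marked) continue;   // already visited
            o.marked = true;
            stack.addAll(o.refs);     // follow every outgoing reference
        }
    }

    public static void main(String[] args) {
        Obj root = new Obj(), child = new Obj(), garbage = new Obj();
        root.refs.add(child);               // child is reachable via root
        mark(List.of(root));
        System.out.println(child.marked);   // true  -> kept
        System.out.println(garbage.marked); // false -> reclaimable
    }
}
```

A lazy collector that bails out of `mark` on big graphs has to err on the safe side and treat the whole subgraph as live, which is exactly where the leaks come from.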
I recently had some discussions about .NET performance, JIT-generated code performance in general, and the applicability of managed code platforms to "number crunching" domains like scientific computing or game development. After some research I found that there are nearly no current benchmark results posted anywhere for up-to-date versions of the different runtimes/platforms, so I decided to do my own investigation. For the pretty narrow domain of "number crunching" I chose the well-known SciMark2 in its vanilla Java and ANSI C versions, as well as its C# port that lives in the pnetmark project (original version from Cornell University). Please keep in mind that this is no general benchmark to be used in "my runtime is faster than yours" contests, but a narrow view on some performance characteristics of the above-mentioned problem domains.
The test machine was a dual-CPU i7 920 (8 physical cores, 16 logical), 16 GB 1600 MHz DDR3 RAM, Windows 7 Ultimate 64-bit. (Note that SciMark only uses one thread at the moment, so it is a single-core benchmark.) The test contenders:
.NET 4.0 RC1
Visual C++ 2010
Java 1.6.18
Mono 2.6.1 (default JIT)
Mono 2.6.1 (LLVM as JIT)

The binaries for each platform were built using maximum optimizations (Link Time Code Generation, SSE2 and Maximize Speed for Visual C++), then run 10 times each and the results averaged. Here are the results (sorted from best to worst):
Visual C++ 2010: 925 MFLOPS
Mono 2.6.1 (LLVM): 602 MFLOPS
Java 1.6.18: 527 MFLOPS
.NET 4 RC1: 525 MFLOPS
Mono 2.6.1: 420 MFLOPS

Conclusion: Java and .NET offer basically the same performance, Mono with LLVM seems to be the fastest managed environment, and AOT compilers still beat JIT compilers by a big margin. Roughly speaking, switching from native to managed code will cost you 45% performance; whether that is worth it depends on your needs, of course. Again, please note that normal application workloads, where a lot of other things like allocations and branching occur, will show less of a performance difference, since the pure number crunching is offset by other overhead. In the future I would like to test some other JIT-based environments, maybe even some bytecode interpreters, and also try out Intel C++ as well as Mono's full AOT mode with maximum optimizations. Java also warrants a retest with a 1.7 prerelease as soon as a real beta arrives.
One idea I had for using pixel shaders in modern RIA technologies goes as follows. Since we do not have real 3D support in those technologies for now (I do not count software 3D engines like Papervision, since they are simply not up to the task of game 3D graphics), what can we do with shaders and 2D graphics beyond simple filters? In an isometric tile engine, which is quite popular for RIA game development, we could for example have a second sprite that encodes the relative distance to the camera (aka depth or z-depth) for each pixel of the tile. Add a third sprite layer that stores the per-pixel normal of the tile, and you can reconstruct the world-space pixel position in a post-processing shader (similar to what deferred renderers do in 3D), as well as get the normal vector. Now you can solve the whole lighting equation in 3D any way you want, even though only 2D assets are used. I still have to work out the implementation details, but first results look really promising.
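The core of the idea, the per-pixel lighting step, can be sketched independently of any shader language. Given a pixel's reconstructed world position and its normal from the normal sprite, you evaluate an ordinary 3D light just as a deferred renderer would. The camera model and names here are illustrative assumptions, not a specific engine's API; a toy model where world position is simply (screenX, screenY, depth) is assumed:

```java
// Per-pixel Lambert (N.L) point light, as a deferred renderer would
// evaluate it from a G-buffer. Here the "G-buffer" is just the depth and
// normal values decoded from the extra tile sprites described above.
// Illustrative sketch with an assumed trivial camera model.
class DeferredTileLighting {
    static double lambert(double[] pos, double[] normal, double[] lightPos) {
        // vector from the pixel's world position to the light
        double lx = lightPos[0] - pos[0];
        double ly = lightPos[1] - pos[1];
        double lz = lightPos[2] - pos[2];
        double len = Math.sqrt(lx*lx + ly*ly + lz*lz);
        double nDotL = (normal[0]*lx + normal[1]*ly + normal[2]*lz) / len;
        return Math.max(0.0, nDotL);  // clamp back-facing pixels to zero
    }

    public static void main(String[] args) {
        double[] light = {0, 0, 10};   // light directly "above" the pixel
        double[] pos   = {0, 0, 0};    // reconstructed from screen xy + depth
        double facing = lambert(pos, new double[]{0, 0, 1},  light);
        double away   = lambert(pos, new double[]{0, 0, -1}, light);
        System.out.println(facing);    // fully lit
        System.out.println(away);      // back-facing, clamped
    }
}
```

In the real shader version, the same math runs once per screen pixel with depth and normal sampled from the extra sprite layers, which is what lets fully 3D lights work over flat 2D art.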
No, I am not playing the game (that should be left to way smarter people than me), but fiddling around with Microsoft CHESS, a product of their research group that is going to be included in Visual Studio 2010. Right now it is available in beta form, and it simply rocks. The basic premise: a new test runner for the Microsoft testing framework runs your unit tests that use multiple threads a large number of times, with different thread interleave patterns, thus detecting many kinds of threading bugs in your code. Once it has found a thread interleave sequence that makes your unit test fail, you can save that information and add it to the unit test itself, so the test reproduces the problem reliably every time. This is simply a godsend, and the magic does not stop there: while debugging the running unit test you can also break on any thread preemption, and those preemption points are very likely the spots in your code that contain the root cause of the bug. So give this awesome tool a try, and write even better multithreaded code.
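To see why exploring interleavings finds bugs that plain repeated runs miss, here is a hand-simulated version of the classic lost update: a non-atomic `counter++` is really read-add-write, and one specific preemption point between the read and the write loses an increment. CHESS searches for schedules like this automatically across real threads; this sketch just replays that one bad schedule deterministically:

```java
// Hand-simulated bad interleaving of "counter++" across two threads.
// counter++ is three steps (read, add, write); if thread A is preempted
// between its read and its write while thread B completes a full
// increment, B's update is lost. This is the kind of schedule CHESS
// drives a real multithreaded test into. Illustrative sketch only.
class LostUpdateSketch {
    public static void main(String[] args) {
        int counter = 0;

        int aRead = counter;          // thread A: read 0, then preempted

        int bRead = counter;          // thread B: read 0
        counter = bRead + 1;          // thread B: write 1

        counter = aRead + 1;          // thread A resumes with stale read,
                                      // writes 1 and erases B's increment

        System.out.println(counter);  // 1, not the expected 2
    }
}
```

With real threads this schedule might occur once in millions of runs, which is exactly why a tool that enumerates preemption points beats stress testing.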