Friday, January 23, 2015

SalentOS - easy on hardware and the eyes


"The first operating system completely inspired by the Salento, land molded by the sea and wind, steeped in history and art in the south of the Italian peninsula. SalentOS is available for 32 bit machines with PAE and standard kernel (same ISO), in two version with two different Desktops." -http://salentos.it/

*They also have 64-bit versions, and even lighter editions, available on the download page.

I first found SalentOS thanks to DistroWatch's "Based on Ubuntu" list.

I was looking for something very light on its feet. I wanted a slim, gentle-on-resources, yet Ubuntu-powered OS.

I found these qualities in SalentOS.

What's remarkable about SalentOS is that even though its resource use is very light, the developers still manage to create a very beautiful desktop.

"SalentOS is a complete open source project, free from commercial constraints, carried out with passion and dedication." -http://salentos.it/
Here's a video that gives you a glimpse into this unique Linux distro (it's in Italian):
http://youtu.be/AcLdAHfS2Gw?list=UUGiy3Ka9wFFu-jMbMw1xe_w

And here is a screenshot of the OS's desktop configuration:


To download SalentOS, go here: http://salentos.it/downloads.html

Thursday, January 15, 2015

TheSpark.com lives again!


After reading an old internet article about websites that have come and gone, I decided to check them to see what, if anything, had happened to their now defunct and lifeless corpses.

Most pages either simply did not exist or were epitaphs to websites once thriving with internet citizens.

At first I thought The Spark had met the same fate. It looks like a badly made website from the late '80s or early '90s.

Then I saw a date on one of the posts... this was written less than a week ago?

Then the brilliance of it dawns on you. It is supposed to look like Max Headroom exploded on your screen. (And if you don't know who Max Headroom is... never mind.)

Nostalgia will definitely bring people back.

So, as usual, I checked the About page to see what they had to say about their newfound resurgence.

Here is what it says:

"TheSpark.com was founded in 1999 with the tagline “Internet like burning.”

If you’re kind of old, you may remember The Death Test and The Jerk Test and The Purity Test. You may remember The Date My Sister Project and the internet trailblazer who covered himself in stinky meat and wrote about it. Is any of this sounding familiar? No? Fine. Go sit in a hole.

The Spark has been on hiatus for nearly ten years.

Now it is back." -http://thespark.com/about/

TheSpark.com always had very strange but very interesting articles. These are hilarious. The "sexy cement block" article really sticks it to the fashion and garment-making industries. Funny stuff.

I always did love their quirky sense of humor. Like their advertising page:

"Want to advertise on The Spark? You should. This is a legitimate operation. Everything in our office is beige. That’s how legitimate." -http://thespark.com/advertising/

The Masthead and Contributing page reveals that all the editors are women:

"Emily Winter Janet Manley Molly Schoemann Lauren Passell

This is not a feminist lifestyle blog. We all just happen to be women. It’s going to be okay." -http://thespark.com/masthead-and-contributing/

It's like they all just set a time machine for one decade and hit start. (It also turns Pop-Tarts into a black stain on the rotating plate in the microwave. Pretty cool, no?)

I emailed the editors of TheSpark.com about this strange phenomenon.

I got a reply stating that while the site was a little underfunded advertising-wise, it had a promising writing and editorial board. Here is the email I sent and the wonderful response I received:

Dennis Gutowski
Jan 4

to thesparkeditors:

Woah, thespark.com lives again, having de-cloaked off my starboard bow.

I would love to know how this all came about.

I am glad to see thespark live again.

Funny new content!

-Denny

-----

TheSparkEditors
Jan 5

to me:

Thanks, Denny!

SparkNotes.com, the website I work for, was born out of TheSpark.com. We're now owned by Barnes & Noble.com.

We were going to lose the URL due to inactivity unless we restarted the site.

Our editorial team includes several comedic writers and standup comics, so we decided to restart the site—albeit with nearly zero promotions budget—and see where it leads.
I'm glad you like it, and would be happy to answer more questions!

Emily Winter
Editor and Other Things
The Spark

-----

I emailed the editors back with some questions, and hope to add those as a follow up. So stay tuned for (hopeful) further announcements!

Either way, if you need a good laugh or want to stroll down memory lane, head on over to the reborn TheSpark.com.

Sunday, January 4, 2015

SCIgen, a random research paper maker


SCIgen - An Automatic CS Paper Generator

"SCIgen is a program that generates random Computer Science research papers, including graphs, figures, and citations. It uses a hand-written context-free grammar to form all elements of the papers. Our aim here is to maximize amusement, rather than coherence." -http://pdos.csail.mit.edu/scigen/

Ever had a paper to write, but felt too lazy to write it? Well, just sit right back, push a button, and voilà, a realistic-feeling paper with:

* References

* Related work references

* A well-thought-out Table of Contents

* A clear and excellent introduction

* No "Lorem Ipsum" filler, it seems real unless you dig into it, then you realize it's not a real paper.

* Charts, pie graphs, bar graphs, and plotted data

If I were a busy professional who just skims over these papers, I would totally believe they were genuine.

"One useful purpose for such a program is to auto-generate submissions to conferences that you suspect might have very low submission standards. A prime example, which you may recognize from spam in your inbox, is SCI/IIIS and its dozens of co-located conferences (check out the very broad conference description on the WMSCI 2005 website). There's also a list of known bogus conferences. Using SCIgen to generate submissions for conferences like this gives us pleasure to no end. In fact, one of our papers was accepted to SCI 2005! See Examples for more details.

We went to WMSCI 2005. Check out the talks and video. You can find more details in our blog." -http://pdos.csail.mit.edu/scigen/

SCIgen has fooled a lot of people and institutions. It boggles the mind just how well this software works...

Here are some examples from http://pdos.csail.mit.edu/scigen/ of the successes and failures this script has had in getting its papers published... That's right, a computer program spitting out nonsense HAS succeeded in getting papers accepted at scientific conferences!

"Examples

Here are two papers we submitted to WMSCI 2005:
Rooter: A Methodology for the Typical Unification of Access Points and Redundancy (PS, PDF)
Jeremy Stribling, Daniel Aguayo and Maxwell Krohn

This paper was accepted as a "non-reviewed" paper!

Anthony Liekens sent an inquiry to WMSCI about this situation, and received this response, with an amazing letter attached." -http://pdos.csail.mit.edu/scigen/

If this hasn't bowled you over by now, then check out this interesting paper I apparently released. ;)

Relational, Adaptive Modalities for E-Commerce

Dennis Andrew Gutowski Jr

Abstract

Unified interposable modalities have led to many essential advances, including interrupts and agents. In fact, few leading analysts would disagree with the evaluation of XML, which embodies the important principles of electrical engineering [11]. Fleck, our new framework for classical configurations, is the solution to all of these grand challenges.

Table of Contents

1  Introduction


Recent advances in multimodal modalities and cacheable theory have paved the way for scatter/gather I/O. in this position paper, we disconfirm the evaluation of wide-area networks. The notion that security experts cooperate with virtual theory is generally well-received. However, semaphores alone cannot fulfill the need for multi-processors.

We describe new concurrent archetypes (Fleck), arguing that DHTs can be made client-server, robust, and reliable. Nevertheless, the visualization of extreme programming might not be the panacea that statisticians expected. Next, indeed, Lamport clocks and object-oriented languages have a long history of cooperating in this manner. Therefore, we see no reason not to use the development of red-black trees to study the construction of write-back caches.

This work presents three advances above prior work. To begin with, we validate not only that consistent hashing and scatter/gather I/O are always incompatible, but that the same is true for forward-error correction. Second, we present a novel algorithm for the refinement of I/O automata (Fleck), which we use to confirm that forward-error correction can be made amphibious, peer-to-peer, and collaborative. We concentrate our efforts on demonstrating that cache coherence can be made linear-time, ubiquitous, and wearable.

The rest of this paper is organized as follows. For starters, we motivate the need for the UNIVAC computer. Along these same lines, to answer this grand challenge, we propose an empathic tool for constructing hierarchical databases [11] (Fleck), which we use to show that DHTs can be made certifiable, empathic, and signed. Along these same lines, we prove the analysis of Smalltalk. In the end, we conclude.

2  Architecture


In this section, we explore a design for simulating virtual methodologies. Fleck does not require such an extensive prevention to run correctly, but it doesn't hurt. Furthermore, Figure 1 depicts the relationship between Fleck and the deployment of B-trees. See our existing technical report [3] for details.


dia0.png
Figure 1: An architectural layout diagramming the relationship between our system and event-driven technology.

Rather than synthesizing the deployment of information retrieval systems, Fleck chooses to manage the exploration of multicast methodologies. We assume that each component of Fleck prevents checksums, independent of all other components. Though such a claim is always a confusing intent, it fell in line with our expectations. Any typical emulation of the development of cache coherence that would allow for further study into voice-over-IP will clearly require that the foremost heterogeneous algorithm for the investigation of web browsers by C. Maruyama is in Co-NP; our algorithm is no different. We instrumented a day-long trace demonstrating that our model is feasible. We show our framework's heterogeneous study in Figure 1.

Reality aside, we would like to investigate a methodology for how our solution might behave in theory. Next, Figure 1 depicts the methodology used by our methodology. We use our previously analyzed results as a basis for all of these assumptions. Although cyberinformaticians rarely hypothesize the exact opposite, Fleck depends on this property for correct behavior.

3  Implementation


Our implementation of our system is "smart", metamorphic, and extensible. End-users have complete control over the hacked operating system, which of course is necessary so that the location-identity split and Web services can connect to fulfill this ambition. Fleck is composed of a codebase of 83 B files, a server daemon, and a server daemon. We omit these results for now. We have not yet implemented the centralized logging facility, as this is the least unproven component of Fleck. Our solution is composed of a hacked operating system, a client-side library, and a codebase of 90 Lisp files.

4  Results


We now discuss our performance analysis. Our overall evaluation approach seeks to prove three hypotheses: (1) that object-oriented languages have actually shown improved 10th-percentile energy over time; (2) that instruction rate is an outmoded way to measure expected throughput; and finally (3) that mean block size stayed constant across successive generations of Macintosh SEs. Our evaluation strives to make these points clear.

4.1  Hardware and Software Configuration



figure0.png
Figure 2: The expected bandwidth of our system, as a function of instruction rate.

Our detailed evaluation methodology necessary many hardware modifications. We instrumented a packet-level deployment on UC Berkeley's desktop machines to quantify extremely efficient technology's effect on P. Zhao's deployment of lambda calculus in 1999. had we deployed our desktop machines, as opposed to simulating it in courseware, we would have seen improved results. Primarily, we added 200GB/s of Ethernet access to the KGB's network to examine the NV-RAM space of our network. Further, we added more USB key space to our decommissioned Macintosh SEs to better understand archetypes. The ROM described here explain our unique results. Next, we removed 3MB/s of Wi-Fi throughput from our mobile telephones to discover information. Along these same lines, French system administrators doubled the ROM space of our Planetlab cluster.


figure1.png
Figure 3: These results were obtained by Miller et al. [4]; we reproduce them here for clarity.

Fleck runs on hacked standard software. Our experiments soon proved that making autonomous our random Knesis keyboards was more effective than autogenerating them, as previous work suggested. We implemented our Smalltalk server in Python, augmented with provably exhaustive extensions. All of these techniques are of interesting historical significance; Dennis Ritchie and Venugopalan Ramasubramanian investigated an entirely different setup in 1953.

4.2  Dogfooding Fleck


Our hardware and software modficiations exhibit that deploying our algorithm is one thing, but emulating it in courseware is a completely different story. Seizing upon this approximate configuration, we ran four novel experiments: (1) we ran red-black trees on 33 nodes spread throughout the 1000-node network, and compared them against hierarchical databases running locally; (2) we measured NV-RAM throughput as a function of flash-memory space on an Apple Newton; (3) we ran 32 trials with a simulated database workload, and compared results to our middleware simulation; and (4) we measured database and Web server throughput on our ubiquitous testbed.

Now for the climactic analysis of experiments (1) and (3) enumerated above. Note the heavy tail on the CDF in Figure 2, exhibiting duplicated expected energy. The data in Figure 3, in particular, proves that four years of hard work were wasted on this project [4]. Operator error alone cannot account for these results.

Shown in Figure 2, the second half of our experiments call attention to Fleck's 10th-percentile clock speed. We scarcely anticipated how accurate our results were in this phase of the evaluation. Next, the data in Figure 3, in particular, proves that four years of hard work were wasted on this project. Next, the results come from only 4 trial runs, and were not reproducible.

Lastly, we discuss the second half of our experiments. Note that Figure 3 shows the 10th-percentile and not effective partitioned block size. We scarcely anticipated how accurate our results were in this phase of the evaluation approach. Of course, all sensitive data was anonymized during our courseware deployment.

5  Related Work


In designing Fleck, we drew on existing work from a number of distinct areas. Gupta et al. developed a similar application, contrarily we argued that Fleck is Turing complete [13]. Anderson et al. explored several ambimorphic methods, and reported that they have tremendous inability to effect RAID. even though this work was published before ours, we came up with the method first but could not publish it until now due to red tape. Therefore, the class of heuristics enabled by Fleck is fundamentally different from prior approaches [6,10].

The simulation of extensible epistemologies has been widely studied [7]. Bhabha developed a similar framework, on the other hand we showed that Fleck runs in O(n2) time [9]. We believe there is room for both schools of thought within the field of pipelined software engineering. Continuing with this rationale, Fleck is broadly related to work in the field of stochastic electrical engineering by Zhao et al., but we view it from a new perspective: the evaluation of the Internet [8]. Fleck also prevents multicast algorithms, but without all the unnecssary complexity. Therefore, despite substantial work in this area, our solution is clearly the application of choice among information theorists [11]. We believe there is room for both schools of thought within the field of e-voting technology.

While we know of no other studies on online algorithms, several efforts have been made to measure forward-error correction [12]. Obviously, comparisons to this work are astute. On a similar note, the original solution to this quagmire [14] was bad; nevertheless, such a hypothesis did not completely solve this issue. On a similar note, Thompson [1] developed a similar method, contrarily we disproved that Fleck follows a Zipf-like distribution. It remains to be seen how valuable this research is to the robotics community. Thusly, despite substantial work in this area, our method is apparently the approach of choice among mathematicians [2].

6  Conclusion


Fleck will address many of the issues faced by today's cyberneticists. On a similar note, to address this challenge for classical algorithms, we explored an analysis of telephony. We demonstrated that even though the foremost highly-available algorithm for the investigation of the memory bus by Rodney Brooks [5] is maximally efficient, e-commerce can be made "fuzzy", wireless, and client-server. The evaluation of thin clients is more structured than ever, and our framework helps end-users do just that.

References

[1]
Agarwal, R., and Knuth, D. Bayesian epistemologies for kernels. Journal of Cacheable, Certifiable Theory 10 (Aug. 2000), 75-99.
[2]
Dijkstra, E., Hamming, R., Simon, H., Watanabe, a., and Ito, B. A case for Scheme. In Proceedings of the Workshop on Game-Theoretic Technology (Sept. 2005).
[3]
Gupta, a., and Yao, A. A case for local-area networks. In Proceedings of the Workshop on Scalable Theory (Jan. 2004).
[4]
Jr, D. A. G. Decoupling Boolean logic from information retrieval systems in the lookaside buffer. In Proceedings of INFOCOM (Nov. 1998).
[5]
Jr, D. A. G., and Bose, F. Evaluation of neural networks. In Proceedings of PLDI (May 2001).
[6]
Lampson, B., and Hennessy, J. Decoupling Voice-over-IP from sensor networks in Markov models. In Proceedings of IPTPS (Aug. 2002).
[7]
Leary, T., and Ritchie, D. Improving virtual machines and SMPs. Journal of Electronic Methodologies 5 (Oct. 2000), 20-24.
[8]
Lee, D., and Newton, I. Visualizing architecture using ambimorphic symmetries. In Proceedings of the Conference on Bayesian Symmetries (Dec. 1997).
[9]
Milner, R. Von Neumann machines no longer considered harmful. In Proceedings of HPCA (Mar. 2005).
[10]
Needham, R., Zhou, L., and Wilkes, M. V. The impact of amphibious algorithms on operating systems. In Proceedings of MOBICOM (Jan. 1995).
[11]
Shenker, S. A case for architecture. In Proceedings of the Workshop on Extensible Algorithms (July 2000).
[12]
Smith, a., Garcia, V. S., Jr, D. A. G., Jr, D. A. G., and Taylor, V. Deconstructing the Turing machine. In Proceedings of the Workshop on Authenticated Models (Apr. 2001).
[13]
Wu, L., and Martinez, R. A study of the UNIVAC computer. In Proceedings of SIGMETRICS (Apr. 1993).
[14]
Zheng, B. Client-server, classical modalities. In Proceedings of the USENIX Security Conference (Oct. 1997).