Thursday, October 17, 2019

Technical Overview of the Common Language Runtime

The paper makes a comparison between the Java Virtual Machine (JVM) of Java and the Common Language Infrastructure (CLI) of Microsoft .NET, arguing that although most language implementers have adopted the JVM as the vehicle for their languages, it won't be the best option for all of them the way it is for Java. The paper starts by listing some previous attempts to develop virtual machines, intermediate languages, and language-independent execution platforms. It was great to learn about these because I did not know anything about them, and I supposed there might be a good reason to look beyond them; indeed, the paper mentions that the reasons to watch for other options are portability, compactness, efficiency, security, interoperability, and flexibility.

The CLI has been designed, with the help of implementers of different languages, to overcome the problems that the JVM may have (for example, the lack of support for unboxed structures and unions). The paper also describes some parts of the CLI's architecture in order to help us understand a little bit more about how it works inside. The remarkable part is that, in contrast to the JVM, storage locations in the CLI are "polymorphic", which means that their size can be user-defined, although it is fixed for the lifetime of the frame.

Later, the paper describes some of the CLI's features in more detail. The type system section describes some of the primitive types that are supported and lets us know that they can be combined into composite types. The base instruction set section describes some of the most representative instructions of each group and points out that, unlike the JVM, the CLI does not hard-code the types of the arguments within the instructions. Finally, the last parts are about reference and value types, how they interact, and invoking methods. These sections describe some interesting facts about how they work, for example the ways to make an indirect call through a function pointer.
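To make that contrast concrete, here is a minimal sketch of my own (a toy stack machine in Python, not real JVM bytecode or CIL): the JVM-style opcode bakes the operand type into the instruction name ("iadd" adds integers), while the CLI-style opcode is generic and takes its meaning from the operands on the stack.

```python
def run(program, style="cli"):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])
        elif style == "jvm" and op == "iadd":   # type baked into the opcode
            b, a = stack.pop(), stack.pop()
            stack.append(int(a) + int(b))
        elif style == "cli" and op == "add":    # generic: type comes from operands
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack

print(run([("push", 2), ("push", 3), ("iadd",)], style="jvm"))   # [5]
print(run([("push", 2.5), ("push", 3), ("add",)], style="cli"))  # [5.5]
```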

Sunday, October 13, 2019

Building Server-Side Web Language Processors

The article talks about how to build a server-side web language processor; the author gives an example of this through building Java applets and running them with a web browser. The author also explains that this may be desirable because some instructors at certain colleges need to teach using a compressed approach. I wonder if this is something like the new educational model that the Tecnológico de Monterrey has adopted (the Tec21 model), because the description mentions that it is a union of the application of different subjects in order to learn about them within a single course.

The provided description of language processors is that they "allow us to run programs or prepare them to run". According to the proposed architecture, the locations for our language processor can be the client and the server. The recommended features for the language are basic configuration, compact syntax, dynamic typing, garbage collection, and direct support for high-level collections (strings, dictionaries, etc.). The point here is to do some computations and later produce an output in the form of HTML, XML, or plain text. Then we need to consider the presentation of the code. We have two different types of elements (static and dynamic); the first way is to write the static elements (HTML tags) within the dynamic ones (language instructions), and the second way is to embed the language code inside the static notation, which is known as a template view.
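As a quick illustration (my own sketch, not from the article), here are the two styles producing the same page fragment; the TEMPLATE placeholder syntax is hypothetical, standing in for a real template engine's notation:

```python
# Style 1: static HTML written inside the dynamic code.
def page_from_code(names):
    html = "<ul>\n"
    for name in names:
        html += f"  <li>{name}</li>\n"
    return html + "</ul>"

# Style 2: template view -- code embedded inside the static notation.
TEMPLATE = "<ul>\n{items}\n</ul>"

def page_from_template(names):
    items = "\n".join(f"  <li>{name}</li>" for name in names)
    return TEMPLATE.format(items=items)

print(page_from_code(["Ana", "Luis"]))
print(page_from_template(["Ana", "Luis"]))  # same output, different presentation
```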

To accomplish our goal we need to know how HTTP works, which is a request/response standard between a client and a server. There are two parts: the first one establishes a Transmission Control Protocol (TCP) connection with a port on a host computer, and the other one is a server listening on a port, waiting for requests in order to answer them. We also need to consider the use of the different types of scopes for the variables, which can be local, page, request, session, and application.
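Here is a minimal sketch of that server side in Python (my own illustration; the port and response body are arbitrary): a socket listens on a TCP port, reads the request, and answers with HTML that our language processor could have produced.

```python
import socket

def serve(port=8080):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("", port))
        server.listen()
        while True:
            client, _ = server.accept()               # one TCP connection per request
            with client:
                request = client.recv(4096).decode()  # e.g. "GET / HTTP/1.1 ..."
                body = "<p>Hello from the language processor</p>"
                client.sendall(
                    ("HTTP/1.1 200 OK\r\n"
                     "Content-Type: text/html\r\n"
                     f"Content-Length: {len(body)}\r\n"
                     "\r\n" + body).encode())

# serve()  # uncomment to run, then visit http://localhost:8080
```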

Friday, October 4, 2019

Language Design and Implementation using Ruby and the Interpreter Pattern

The paper talks about how to use the S-expression interpreter framework (SIF), which is written in Ruby (described later on), to teach language design and implementation. The SIF has a simple core that can be extended (we will talk about this later, specifically about how it can be extended to support functional or imperative programming). "S-expressions" means symbolic expressions, a parenthesized prefix notation used in the Lisp family of languages, for example (+ a b). This is interesting because I did not remember this concept from the course where we learned Clojure, and it is good to be reminded of it.
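As a reminder for myself, here is a minimal sketch (my own Python, not the SIF's Ruby) of how that parenthesized prefix notation can be read into a nested structure:

```python
def tokenize(source):
    return source.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    token = tokens.pop(0)
    if token == "(":                    # start of a list: read until ")"
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)                   # drop the closing ")"
        return expr
    return token                        # an atom (symbol or number)

print(read(tokenize("(+ a (* b c))")))  # ['+', 'a', ['*', 'b', 'c']]
```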

The SIF works as the interpreter pattern (one of the patterns of the "Gang of Four"), and as I said, it is written in Ruby, which is an interpreted, dynamically typed language whose syntax borrows from Eiffel, Ada, and Perl, and whose object orientation is in the spirit of Smalltalk. One of its most important components is the Node class and its subclasses (I think this is related to the third phase of our project, where I saw that we will use the Node class and implement subclasses from it). I did not understand one hundred percent how this class works, but I understand that it partly checks the syntax and semantics, something that we have already done for our project.
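Here is a minimal sketch (my own, assuming a hierarchy like the one the paper describes, not its actual code) of the interpreter pattern: each Node subclass knows how to evaluate itself, so a tree of nodes evaluates recursively.

```python
class Node:
    def evaluate(self):
        raise NotImplementedError

class Number(Node):
    def __init__(self, value):
        self.value = value
    def evaluate(self):
        return self.value

class Add(Node):
    def __init__(self, left, right):
        self.left, self.right = left, right
    def evaluate(self):
        return self.left.evaluate() + self.right.evaluate()

# (+ 1 (+ 2 3)) as a node tree:
tree = Add(Number(1), Add(Number(2), Number(3)))
print(tree.evaluate())  # 6
```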

Finally, as I said before, the paper talks about how we can extend the functionality of the SIF so that it can interpret functional programming languages, by defining some special "forms" (quote, define, if, and fn) so that it is capable of "reading" their grammar. On the other hand, the author mentions how to extend it to be useful for imperative programming languages, by defining the special "forms" (set! and begin) and a new class called "Environment".
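A minimal sketch (my own, covering only some of the forms named above, not the SIF's actual implementation) of how an evaluator can dispatch on those special forms using an environment that maps names to values; define and set! mutate it, which is what makes the imperative extension possible:

```python
def evaluate(expr, env):
    if isinstance(expr, str):                 # a symbol: look it up
        return env[expr]
    if not isinstance(expr, list):            # a literal value
        return expr
    head = expr[0]
    if head == "quote":                       # (quote x) -> x, unevaluated
        return expr[1]
    if head == "if":                          # (if test then else)
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if head in ("define", "set!"):            # (define name value)
        env[expr[1]] = evaluate(expr[2], env)
        return env[expr[1]]
    if head == "begin":                       # (begin e1 e2 ...) -> last value
        result = None
        for sub in expr[1:]:
            result = evaluate(sub, env)
        return result
    raise ValueError(f"unknown form: {head}")

env = {}
evaluate(["define", "x", 10], env)
print(evaluate(["if", ["quote", True], "x", 0], env))  # 10
```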

Friday, September 27, 2019

Mother of Compilers

The article and the video are about Rear Admiral Grace Brewster Murray Hopper, who is known as the "mother of COBOL", and some of the most important events of her life and her contributions to the IT field. Even though the history of computer hardware is, as the article says, mostly male-oriented, Grace Hopper's contributions to this field are comparable to those of other IT revolutionaries. I didn't know who she was; it is true that I did not know how much she contributed compared to the contributions of Steve Jobs, Bill Gates, or Alan Turing. Also, I did not know that the first programmer was Ada Lovelace.

It is interesting that she disassembled clocks in order to know how they worked, and it is also funny to know that she ended up disassembling another six clocks; it reminds me of when we "disassemble" code to know how it works. It is also impressive that she studied math, physics, and engineering at Vassar College in New York and earned a master's degree and a doctorate in math and mathematical physics. The surprises continue: I knew the story about how the term "bug" was invented, but I did not know that it was she and her team who coined it.

Finally, other important events of her life were her inventions: the construction of the Mark I, an electro-mechanical computer; the completion of the UNIVAC; the programming of the BINAC, which was a binary machine built for the Snark missile project; the writing of the A-0 compiler; and finally the development of COBOL. She earned multiple awards. I feel bad, because I really did not know anything about her and her contributions, even though they are an important part of the history of software. I hope to read more articles about her, and maybe about the development of the A and B compilers.

Sunday, September 8, 2019

Internals of GCC

The podcast was interesting. I think I had not used GCC until I started my degree, but I remember that when I was younger I sometimes watched my uncle (who also studied computer systems engineering) use it (I didn't know what it was), and it was something incredible for me. Don't misunderstand me: I used to think that it was something very complex, and it scared me, because as a child everything I did with a computer involved a GUI. Nowadays I don't know everything about it, and who does? But it's interesting to know about it, because it makes me feel that this can help us get a little bit closer to the "heart" of a computer and what makes it work as we know it.

The podcast talks about the three parts of compilers: the front end, the middle end, and the back end. Each layer works on its own and is isolated from its neighbors, but they communicate with each other in order to make the compilation process possible. The podcast also explains why this process is important to give programs the "ability" to be portable and not be constrained by the hardware. This, as we know, allows developers to produce software without worrying about the compatibility of their product with other architectures (in most cases).
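A minimal sketch (my own, with two invented toy languages and a made-up target machine) of why that separation buys portability: both front ends emit the same tiny intermediate representation, and one back end serves them both, so N languages and M machines need N + M pieces instead of N x M full compilers.

```python
def front_end_lang_a(source):          # hypothetical language A: "1 plus 2"
    left, _, right = source.split()
    return [("push", int(left)), ("push", int(right)), ("add",)]

def front_end_lang_b(source):          # hypothetical language B: "(+ 1 2)"
    _, left, right = source.strip("()").split()
    return [("push", int(left)), ("push", int(right)), ("add",)]

def back_end_x(ir):                    # emits "assembly" for imaginary machine X
    return "\n".join(f"X_{op.upper()} {args[0] if args else ''}".strip()
                     for op, *args in ir)

print(back_end_x(front_end_lang_a("1 plus 2")))  # same IR, same back end
print(back_end_x(front_end_lang_b("(+ 1 2)")))
```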

Finally, the podcast remarks that maybe we don't need to know how a compiler works, and that's something we have heard before (as in the title of the previous article we read, "Making Compiler Design Relevant…"). I think this is true: in most cases we really don't need to know how the compiler is designed or how it works "inside", but as the previous article mentioned, maybe we can find something interesting and useful for other problems that we may face. It's something like mechanical engineers knowing how a fuel engine works, but not how it is designed or how they could build one.

Thursday, August 29, 2019

The Hundred-Year Language

The article talks about the "evolution" of programming languages. The author makes an analogy between the Neanderthals and "primitive languages" such as COBOL, and how they were replaced by "more adaptive" versions. The author thinks that nowadays maybe the next specimen to be "taken down" is Java, because more languages are appearing that can adapt better to the needs of the hardware and of programmers. The author continues the analogy by explaining that, as in evolutionary theory, there are branches between different languages, but in this case they are more complex and occur more slowly. Going deeper into the last point, the author explains that one factor that makes this happen is that languages are notation, not technology.

Later, the article lists the two components of a language: the axioms (operator-like primitives) and the rest of the language, which is written in terms of them. On the other hand, the author mentions Moore's Law and how it may stop working in the future due to the inability to keep expanding as much as the law says it should. Another interesting mention is a rule of thumb, which I didn't know, stating that each layer of translation between the hardware and the main application costs a factor of ten in execution speed. The article continues by exposing multiple examples of tools that were developed inefficiently (Lisp's initial design, Arc, etc.) but that took advantage of bottom-up programming, which is writing a series of layers, each one providing the base for the layer above.
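A minimal sketch (my own example, not from the essay) of bottom-up programming: each function is a layer written in terms of the one below it, so the top layer reads almost like the problem statement.

```python
def tokens(text):                        # layer 1: primitives
    return text.lower().split()

def word_counts(text):                   # layer 2: built on layer 1
    counts = {}
    for word in tokens(text):
        counts[word] = counts.get(word, 0) + 1
    return counts

def most_common(text):                   # layer 3: reads like the domain
    return max(word_counts(text).items(), key=lambda kv: kv[1])

print(most_common("the quick fox jumps over the lazy dog"))  # ('the', 2)
```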

Finally, the article also touches on some other interesting points, such as parallel computing and some important considerations when designing a new language. For the first point, the author thinks that it could be possible in the future, but that it would be only for certain applications; the other applications would first need a plain version that is later "mapped" to an optimized version for parallel computing. The "tips" for developing programming languages mention that we need to keep our target in mind when developing: what type of programs we want to be able to write and the size of the parse tree.

Tuesday, August 20, 2019

Making Compiler Design Relevant for Students who will (Most Likely) Never Design a Compiler

The article is by a professor from the Department of Computer Science at the University of Arizona and lists some points that attempt to make the compiler design course more attractive. The article starts by telling us one of the first exercises the author assigned to a compiler design course; honestly, it sounds complicated, and the author confirms this when he says that the point of the exercise was to give the students a first approach to the use of lex & yacc. But returning to the main point of the reading, we can highlight that the examples used are simple to understand, but maybe hard to implement in practice. I wonder what practices we will be doing in our course.

I liked that the reading let me understand better the translation phases that we discussed last class. As I understand the steps of a translation, the order and "output" of each phase are the following:

1. Lexical analysis and parsing: this phase takes a string and divides it into tokens (words, punctuation, etc.), and parsing is the process of giving structure to those tokens.
2. Semantic analysis: this phase computes and then propagates information that is not part of the context-free syntax of the language (this means that the output of phase 1 is processed and passed to some "mechanisms" that check it against some rules).
3. Code generation: this phase processes the tree representation of the program (traversing it, starting with the child nodes and doing operations at the parent nodes) and generates machine language.
4. Code optimization: this phase attempts to reduce the cost of the generated code, whether in energy usage, running time, or size.

I don't know if I'm missing or misunderstanding something, but I consider this the most basic explanation and the easiest way to understand translation.
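To check my understanding, here is a minimal sketch (my own; it skips semantic analysis and optimization, which would be trivial here) of phases 1 and 3 for a tiny language of sums like "1 + 2 + 3":

```python
def lex(source):                             # phase 1a: string -> tokens
    return source.split()

def parse(tokens):                           # phase 1b: tokens -> tree
    tree = ("num", int(tokens[0]))
    i = 1
    while i < len(tokens):                   # fold each "+ n" pair to the left
        tree = ("add", tree, ("num", int(tokens[i + 1])))
        i += 2
    return tree

def generate(node):                          # phase 3: tree -> "machine" code
    if node[0] == "num":
        return [f"PUSH {node[1]}"]
    left, right = node[1], node[2]           # children first, then the parent op
    return generate(left) + generate(right) + ["ADD"]

print(generate(parse(lex("1 + 2 + 3"))))
# ['PUSH 1', 'PUSH 2', 'ADD', 'PUSH 3', 'ADD']
```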

Monday, August 12, 2019

First Entry

Hi, my name is Ricardo:

I study ISC, and I'm now in my 9th semester. I like videogames, especially shooters and sandbox games, and more recently fighting games too (MK11). I still like reading drama and sci-fi books, and I play basketball and football. I also like to go out to parties and the cinema, and to eat with my friends or my girlfriend, with whom I have been for almost two and a half years. Additionally, I'm about to complete my first year working at CSQTech; I've enjoyed it so much, I have learned a lot of things, and I hope to join a project full time after graduating. I'm going to graduate (if all goes right) this semester. I feel kind of nervous and scared because time has passed so fast, but at the same time I'm very excited, and I trust myself to accomplish this and start living new experiences outside of school.

I expect to learn in depth how compilers work, to learn a bit more about C#, and, well, I hope to pass the legendary exam successfully, learn the basics of how to develop a compiler, and pass the course.