Graphics Programming Language

Hi!
I'm looking for a simple GPL (graphics programming language) written in Java. Statements should be something like:
-addLine l1 at ...
-wait for 0.5s
-remove l1
-zoom in region at ... (moves smoothly to the new area)
-add plot of function ...
Does such a language exist?
Thanks in advance!
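
To make the idea concrete, a tiny interpreter for statements like these could start out as below. This is only a sketch under assumptions of mine: the class name and the exact command grammar are made up, the zoom and plot commands are left out, and a real version would draw via Java 2D instead of just bookkeeping.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of an interpreter for a tiny graphics scripting language.
    public class GraphicsScript {
        private final Map<String, String> shapes = new HashMap<>();

        public void execute(String statement) throws InterruptedException {
            String[] tokens = statement.trim().split("\\s+");
            switch (tokens[0]) {
                case "addLine":   // e.g. "addLine l1 at 0,0 10,10"
                    shapes.put(tokens[1], statement);   // remember the line under its id
                    break;
                case "wait":      // e.g. "wait 0.5" (seconds)
                    Thread.sleep((long) (Double.parseDouble(tokens[1]) * 1000));
                    break;
                case "remove":    // e.g. "remove l1"
                    shapes.remove(tokens[1]);
                    break;
                default:
                    throw new IllegalArgumentException("unknown command: " + tokens[0]);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            GraphicsScript script = new GraphicsScript();
            script.execute("addLine l1 at 0,0 10,10");
            script.execute("wait 0.5");
            script.execute("remove l1");
        }
    }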

Hi Daniel,
Have a look at this URL:
http://help.sap.com/saphelp_nw04/helpdata/en/0c/5c55a8d0a611d2963100a0c9308b1f/content.htm
Also have a look at the SGRB package/development class.
Thanks
Lakshman
Message was edited by: Lakshman Tandra

Similar Messages

  • What are the programming languages used in Adobe LiveCycle development?

    Hi friends,
    I am a fresher. I wondered what Adobe has done in the LiveCycle import option, so I want to know what programming language is used and how they read graphical fields like lines and circles.

    Programmatically, Java is the language used to implement business logic that plugs into LiveCycle.  I believe you can use C# for older versions.  However, that information is irrelevant to your problem.
    That being said, it's typically not advisable to modify PDF files generated by different applications (iText, Adobe LiveCycle, etc.).  While both documents may render as "PDF", they will not be created internally in the same manner.  While it may be easy to add PDF files generated by other applications as attachments, they are not going to be handled easily unless you address each specific document in a custom manner.

  • What programming language should I learn?

    Hey! I'm new to Arch Linux, and to Linux as well. I have a friend who makes lots of cool stuff in Arch Linux. He says he uses C++ etc. I haven't read that much about Arch Linux, so I wonder: where should I start? What is the easiest and coolest? Anyone want to tell me?
    BTW: What language does the "terminal" use?
    Could you add a tutorial link too, one which YOU think is a good tutorial?
    I really hope for an answer; by the way, this is my first post on this forum.
    Have a good day.
    Kim

    Don't start with programming. Move that to your medium-term to long-term goals. If you've just switched to Linux, you have a lot of learning you need to do. UNIX is a hugely flexible OS, but with that comes responsibility, so you need to know how to wield your sword well before you wield it.
    Head to the Arch Wiki (http://wiki.archlinux.org/) and read the beginner's guide. Read this. Read that. If you don't know something or it looks interesting, read it. If you don't understand something, google it. Wikipedia is typically a good source of info; head there for your best bet at a definitive answer to/for something.
    And about the terminal... it doesn't use a language. Things just print to it. And read stuff from it. That's it. Read about escape sequences to learn about how programs control (almost) all available aspects of the terminal.
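    For instance, a program colours text or moves the cursor just by printing standard ANSI escape sequences; nothing terminal-specific has to be "programmed in" a special language. A quick illustration (in Java purely because it's self-contained; any language that can print behaves the same way):

        public class EscapeDemo {
            public static void main(String[] args) {
                // "\u001B[31m" is the ANSI sequence for red foreground, "\u001B[0m" resets it
                System.out.println("\u001B[31mthis prints in red\u001B[0m and this part does not");
                // "\u001B[1A" moves the cursor up one line, "\u001B[2K" erases that line
                System.out.print("\u001B[1A\u001B[2K");
            }
        }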
    AFTER you've read the wiki, played with (and even maybe broken/fixed) your system a little... take a look at the various languages out there, and pick the one you like the most. There are too many to count, really.
    In the end you're going to have to use something like C, because C is the UNIX language and you can't escape it, so getting used to C syntax will benefit you. Try reading about pointers (head to Wikipedia for that one for sure, but NOT Wikibooks; the Wikibooks article is really bad even though I tried to help it a bit), and if you can get your head around those, try skipping all the other languages out there and going straight to C.
    Regardless of whether you use C at first, just know this: C is very low-level and not very expressive, so you have to write a lot of code to get what you want done with C, but not so much with other languages. C is, however, the second-fastest language out there, following assembly language. So, when you need speed, use C. However, for quick one-time executions or various system tools or utilities, you can usually get by with the shell or something like that.
    In my opinion, here is a list of languages sorted in order of simplicity:
    Shell scripting - built into your shell. Very simple to use but follows a rather interesting and highly loose structure.
    PHP - many people will call you a wuss for using this, but I used it for months and it was great. Its support for graphical programs is very poor, so it's best kept to web- and shell-oriented scripting.
    Ruby - this follows an almost English grammar, so is very readable and learnable. On the other hand, it's known to be amazingly, amazingly slow.
    Perl - from how I've seen this used, I'd say that people would agree with me calling this UNIX's scripted, interpreted alternative to C. Perl is also written in Perl itself, which is quite a nice feature. You can write almost absolutely anything in this, but don't expect it to be too fast - Perl is quite a bit faster than Ruby, but not nearly as fast as C.
    Python - Google use this for their help center thingy, and so do a lot of other projects. It requires you use indentation for formatting, however, so you can't make one-liners with this. It's HIGHLY structured, and from my perspective best for apps that need to be extended over time.
    C++ - not much I can say for this one, except that it's like C, but OOP.
    C - this is THE definitive, de facto UNIX language. If you ever write something remotely successful on a large scale that's small, fast and stable, it will almost certainly be in C, and will most likely have taken a huge number of hours of your time before it was completed.
    Assembly - this is more complex than C. I won't discourage you from having a look at the Wikipedia article on this, but don't worry if it takes you a long time to "get" it.
    Just my two cents.
    -dav7
    Last edited by dav7 (2008-10-16 19:15:28)

  • What generation of programming language is Java?

    C is a 3GL.
    Then what about Java?
    Thank you

    1. 1GL, or first-generation language, is machine language: the strings of 1s and 0s that make the whole system work.
    2. 2GL, or second-generation language, is assembly language. A typical 2GL instruction looks like this: ADD 12,8
    3. 3GL, or third-generation language, is a "high-level" programming language, such as C or Java. A Java language statement looks like this:
             public boolean handleEvent(Event evt) {
                 switch (evt.id) {
                     case Event.ACTION_EVENT:
                         if ("Try me".equals(evt.arg)) {
                             // ...
                         }
                 }
             }
       A compiler converts the statements of a high-level programming language into machine language.
    4. 4GL, or fourth-generation language, is designed to be closer to natural "spoken" language. A 4GL statement might look like this: EXTRACT ALL CUSTOMERS WHERE "PREVIOUS PURCHASES" TOTAL MORE THAN $1000
    5. 5GL, or fifth-generation language, is programming that uses a visual or graphical development interface to create source code that is then compiled with a 3GL or 4GL compiler.

  • Illustrator CC paste in program language

    When I copy in Illustrator CC, the program pastes whatever was copied as program language, but only the first time; the second time I copy and paste, everything is OK. Why is that?

    Hi Ken
    Found the answer. It seems that Illustrator interacts with a Yahoo Widget called Scribbler (a widget that remembers everything that has been copied): every time I copy anything (e.g. graphic elements) in Illustrator CC, it turns up in Scribbler copied as program language, and for some reason the first paste in Illustrator CC is that program-language copy; the second copy and paste in Illustrator CC is OK.
    Problem is solved when Scribbler is turned off.
    Kind regards

  • The programming language of the future: a philosophical question

    So after reading some of these posts and reflecting on the history of programming languages, I find myself wondering what the programming language of the future will be. As the human-computer interface evolves (touch interfaces, voice recognition, computer vision, etc.), how will this influence the process of creating computer programs? Will programming become more graphical (e.g. flow charts, UML diagrams, etc.)? Setting egos aside, does it make sense for programming to be more graphical? Will we reach a point where we can essentially just sketch out a concept and let the compiler/interpreter handle the rest?
    Perhaps this question is best left to Clarke and Asimov, but they're dead. This leaves you guys.
    (Moderator, I apologize in advance if this is out of scope for this forum.)
    Last edited by bsilbaugh (2011-11-23 01:15:47)

    geniuz wrote:I think the mistake of the scientist in question was not that he re-derived the Trapezoid Rule, but that he published it as if it were a new invention, while it already existed and had been published in some form. That certainly could have been prevented by checking the literature before publishing. In fact, I'd not even blame the scientist in question too much, but rather the commission that approved his paper for publication in the first place.
    Sure, he didn't publish the paper in isolation - but the peers who reviewed it would have been people from the same or a related profession. My point is that this kind of error results from splintering of the disciplines. You seem to think that the splintering should be somehow undone or reversed at the level of the peer reviewer or the publisher - but non-specialists (or specialists in other areas) will not be invited to comment, because it will be assumed that they lack the qualifications to do so.
    geniuz wrote:Still, I think it's a long stretch to compare the derivation of a mathematical rule or method to the invention of a complete DSL.
    I think I didn't make it clear where I was going with that. My point was that splintering of specialisations from each other permitted this duplication of effort to take place. I see DSLs as a way to splinter disciplines from each other. Thus, the rise of DSLs would make it easier for duplicate effort to take place.
    geniuz wrote:The whole point of a DSL is that it allows experts in a specific domain to focus their efforts on creatively solving problems directly related to their field of expertise. When specific DSL's are integrated in university curricula world-wide, I hardly think there will be much duplicate effort going on.
    Sorry, but this seems a little naive to me. I can see why you might think DSLs would permit specialists to more efficiently focus on solving their particular problems, and I don't entirely disagree, but even if it's true, I don't think it's controversial to think this will lead to considerably more duplicate effort.
    geniuz wrote:Also, in the world of OSS, there are multiple tools that can perform the exact same job. Do you for instance consider the existence of both of the mail clients Mutt and Alpine as duplicate effort?
    I haven't used Alpine, but if it's exactly like Mutt, then sure, I would certainly say that's duplicate effort. The more people who use one single piece of software, the more bugs can be filed against that software.
    Also, I think this is a false analogy from the start. Mutt and Alpine are both written in C. If we're talking about the connection between duplicate effort and DSLs, let's consider the fact that "communicating and storing computational models of biological processes" (from the Wikipedia page on SBML) could be better done in Lisp, Lex/Yacc, or with the Lemon C++ library, instead of coming up with some new way of using harmful XML.
    geniuz wrote:I still don't quite see how this can be interpreted as a necessary weakness or argument against more complex systems. Sure, DSLs can be dependent upon lower level languages, but if they are considered to increase the effectiveness and efficiency of certain experts, what exactly stops them from becoming dominant and continually evolving?
    The fact that higher and higher levels of specialisation are concomitant with smaller and smaller user bases who still need to communicate with experts in other groups. I'm not saying DSLs should never be used. I just think they are essentially self-limiting, and just as bacteria will long outlive us more complicated life-forms, so too will the lower-level languages long outlive more cumbersome DSLs.
    geniuz wrote:I wasn't implying computers will ever be able to mimic the human brain, and I'm not even sure whether it is something we necessarily want to strive for. All I was saying is that computers have already become indispensable tools in virtually every scientific and engineering discipline. They are computationally strong machines able to solve numerically involved problems at rates no human can ever hope to accomplish. It is this very aspect that will continue to guarantee the success of computers, not AI per se. Again, I believe computers will never (at least not while I'm alive) be able to truly independently mimic and outperform the human brain, especially when it comes to aspects like creativity, i.e. the very aspects of human intelligence that scientists have not even been able to understand and quantify to this date. Hence, humans will always remain "in the loop" to a large extent.
    This all seems reasonable.
    geniuz wrote:Don't forget that laws of physics are "laws" that have been defined and created by humankind for its own convenience. Even recently this year, practice has shown that a concept as fundamental as the speed of light might not be as accurate as it was so widely acknowledged by the scientific community. This however hasn't stopped mankind from using these fundamental "laws" to invent e.g. radio communication and electronic devices.
    I don't think you're disagreeing with me. I'm aware of the fragile nature of what we call the laws of physics - but like you say, they're good enough that we can do things with them. It looks like our knowledge of the speed of light breaks down on really large length scales, and it looks like our knowledge of gravity breaks down on really small length scales, but the brain is in the middle. In between, our models for how the physical world works are very accurate, and it is in this regime that the brain operates.
    Also, you should note that most scientists regard those neutrino test results as residing within the bounds of experimental error, and therefore not strongly indicative that the neutrinos really did break light speed.
    geniuz wrote:I haven't said low-level languages will be abandoned completely, I think they will remain to serve their purpose as a base upon which higher level languages (like DSLs) are built. In that sense, I believe that the user base of these low level languages will become more limited to computer scientists, i.e. to the people responsible for "formulating suitable abstractions to design and model complex systems" (source).
    Seems reasonable.
    geniuz wrote:Having said that, I still don't see why it is so far-fetched that, for the rest of the world, physically telling a computer what to do in their native tongue, as opposed to typing it in some generic text-based programming language, will become the de facto standard. Hence, I will reformulate my statement: programming as most people know it today will eventually become a redundant practice.
    I don't think a friendly human interface should be considered the same as programming. This thread, if we recall the OP, is about programming languages of the future, not user interfaces of the future. I certainly agree that user interfaces will become more intelligent and attractive. I only disagree that this will have any strong impact on how we do programming.
    Last edited by /dev/zero (2011-12-12 19:20:59)

  • The birth of graphical programming.

    I recently found this video, which describes the work Bert Sutherland did as part of his PhD on graphical programming.
    It's really fascinating to see the source of many of the conventions we now use in LabVIEW; what do you guys think?
    Alex Thomas, University of Manchester School of EEE LabVIEW Ambassador (CLAD)

    Hornless.Rhino wrote:
    That gives me an idea..... that would get me sued into oblivion.
    That's a pretty interesting point... I mean, G is a programming language. I don't understand why it should be considered an infringement of copyright or patents if another application takes the conventions established in LabVIEW and reuses them. I mean, a high level While Loop looks the same in any text language, right?
    I suppose the reason I mention this is because over the past year I've been working on a project to create a Java-based, G-like programming environment that could be used to generate abstract text code (the example language was Verilog) as part of an individual project we're supposed to work on within university. Originally I was going to do this from within LabVIEW and use VI Scripting to parse the Block Diagram, but I ran into some problems when developing a clean architecture that was flexible enough to perform different actions based on the specific GObject instance it was dealing with. So although there was a huge overhead in developing the graphical environment from scratch, I figured it'd lend itself to more specific customisation later on. I ended up learning a great deal too.
    The project has been a success, but I've talked to some of the field guys from NI and we decided that it'd be unsuitable for me to put it online. Originally I wanted it to be open source because it would serve as a cute attempt at understanding how LabVIEW works, but it's understandable as to why I can't do that. It must look really strange to them considering I worked with them as an intern for a year... I have to confess that the whole time I spent in technical support, I was really longing to see what LabVIEW was doing in the background and never had the chance to. This project truly felt like my only real chance.
    The way it worked was that, rather than just Verilog, you could pretty much override the conversion stage at any level and use any text output you want with it. Because it had this format, things like re-entrancy would need to be explicitly coded from within the diagram. There was no Front Panel either. Any newly developed graphical code would have a textual equivalent for any supported language... at the moment, for the few graphical blocks and structures I have, there are C and Verilog equivalents. It was never my intent to cause any harm to LabVIEW, because I love it. It's just that outside of these forums, any mention of LabVIEW online is quickly followed by complaints about the pricing, and I wanted to be able to give these people a chance at seeing why it's so good. Giving people the chance to play around with G in their own time, not on someone else's.
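    To give a flavour of what overriding the conversion stage looked like, here is a stripped-down sketch. Every name in it is invented for illustration here; it is not the actual project code:

        import java.util.ArrayList;
        import java.util.List;

        // Each graphical node delegates its textual form to a pluggable emitter,
        // so the same diagram can be rendered as Verilog, C, or anything else.
        interface TextEmitter {
            String emit(GNode node);
        }

        class GNode {
            final String kind;                        // e.g. "WhileLoop", "Add"
            final List<GNode> children = new ArrayList<>();
            GNode(String kind) { this.kind = kind; }
        }

        class CEmitter implements TextEmitter {
            public String emit(GNode node) {
                if (node.kind.equals("WhileLoop")) {
                    StringBuilder body = new StringBuilder();
                    for (GNode child : node.children) {
                        body.append("    ").append(emit(child)).append("\n");
                    }
                    return "while (1) {\n" + body + "}";
                }
                return "/* " + node.kind + " */";     // fallback for unhandled node kinds
            }
        }

    Swapping in a hypothetical VerilogEmitter in place of CEmitter would be the whole "override": the diagram stays the same and only the emitter changes.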
    "I can open up Notepad and write any text code I want in there. Why can't I open up Microsoft Paint and do the same with LabVIEW?".
    I read that comment on YouTube somewhere. It really made me laugh.
    I firmly believed that the project could have been that foothold for those users who have yet to discover the benefits of graphical programming but don't have the budget. You'd get more people using G, and more people having success with LabVIEW.
    People on the forums mention a lot about how they wish it wasn't closed source, and I thought it'd be a great example to toy with. What I really started to enjoy was the fact that I could start putting my own little tweaks into the application that weren't present in LabVIEW; it could grow into a little project for users to benchmark their own changes to the IDE. I had things like only being able to make a wire connection between compatible terminals (which I'd then build on so that it would ghost a potential bridge between initially incompatible terminals, like a Build Array between a scalar value and an array input), and I tried to get a head start on implementing Altenbach's Synchronizer Bar.
    Anyway, it was a nice run. I thought I'd share this little endeavour with you to see what you think. I know that NI are a great company that really embrace the outer communities but I have to admit that I was a little heartbroken when my pet project didn't get the same kind of response, although in hindsight it was plain silly of me to think otherwise.
    Alex Thomas, University of Manchester School of EEE LabVIEW Ambassador (CLAD)

  • Java Programming Language questions...???

    Hi everybody....
    Can I post my questions about the Java programming language here, or only to the relevant Sun forum, http://forum.java.sun.com/index.jspa?
    My greetings,
    Simon

    Simon,
    sure, the worst thing that could happen is that people point you to the SUN forum. Usually this forum answers general Java questions.
    Frank

  • Java programming language main method question?

    Hello everyone, I am quite new to the Java programming language and I have a question concerning my main method. As you can see, I am calling 4 other methods from my main method. What does the null mean when I call the methods? I really don't understand its significance; what else could go there besides null?
    public static void main(String[] args) {
              int cansPerPack = 6;
              System.out.println(cansPerPack);
              int cansPerCrate = 4 * cansPerPack;
              System.out.println(cansPerCrate);
              have_fun(null);
              user_input(null);
              more_java(null);
              string_work(null);
         }

    Edited by: phantomswordsmen on Jul 25, 2010 4:29 PM

    phantomswordsmen wrote:
    ..As you can see I am calling 4 other methods with my main method.
    'Your' main method? Your questions indicate that you did not write the code, so who did?
    ..What does the null mean after I call the method?
    'null' is being passed as an argument to the method, so there is no 'after the method' about it.
    ..I really don't understand its significance, what else could go there besides null?
    That would depend on the method signatures, which are not shown in the code snippet posted. This is one of many reasons I recommend people post an SSCCE.
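    For example, if the missing declarations looked like the hypothetical one below, then null is simply the value handed to a reference-typed parameter, and any other String could be passed instead:

        // Hypothetical signature; the real one was not posted.
        static void have_fun(String message) {
            // null is a legal value for any reference-typed parameter,
            // so the method has to cope with it
            if (message == null) {
                System.out.println("no message supplied");
            } else {
                System.out.println(message);
            }
        }
        // called as: have_fun(null);  or equally: have_fun("hello");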
    BTW - method names like have_fun() do not follow the common nomenclature, and are not good code for a newbie to study. That code should be put to the sword.

  • Java programming language

    What are the major differences between the Java programming language and HTML or XML, and why is it better?

    Thanks for your response, but I am dumb when it comes to this stuff. I am taking an information systems class and the professor is asking us to list three different items that make Java different from other programming languages, and I thought posing that question to this forum would provide me the information I was looking for. The text does not state the answer he is looking for. Any help would be great.
    Thanks

    I'd assume your professor is asking you to do something called "research", where you go read stuff about it/them. What you're doing is called "just ask someone to tell me some answers to turn in as my own", or in other words, "cheating".

  • Java programming language uses call by reference for objects?

    Does the Java programming language use call by reference for objects?

    Yes. You make calls to an object via its reference.

    No.

    Yes, you're referring to passing a reference into a method, in which case the value of the reference is passed.

    I believe the OP is using the term "call by reference" to mean "pass by reference." The two are interchangeable, AFAIK. So, while "making calls to an object via its reference" is correct, I don't believe it's germane to the question.
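    A short demonstration of the distinction (standard Java semantics; the class name is made up): reassigning a parameter inside a method does not affect the caller's variable, because only a copy of the reference is passed, yet mutating the object through that copy is visible to the caller.

        public class PassDemo {
            static void reassign(StringBuilder sb) {
                sb = new StringBuilder("reassigned");  // changes only the local copy
            }
            static void mutate(StringBuilder sb) {
                sb.append(" world");                   // visible to the caller
            }
            public static void main(String[] args) {
                StringBuilder sb = new StringBuilder("hello");
                reassign(sb);
                System.out.println(sb);  // prints "hello": so not call by reference
                mutate(sb);
                System.out.println(sb);  // prints "hello world": the object is shared
            }
        }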

  • Choose a programming language

    I want a program that can
    analyse http://www.youtube.com/browse?s=mr&t=&l=&e=en_US&p=*  (*=1-5) every 10 minutes
    then pick out the links that start with http://www.youtube.com/watch?v=
    export their related information and their links to a form that I want
    sort them by their YouTube added time
    and it should run on my webpage
    So what programming language should I choose to write this program?

    Your program consists of two parts: querying the site at a given interval, and displaying the videos/information on your website. I think PHP would suit best for displaying the videos on your page, but if you have a Python/Ruby powered website, then use the one you built your website with. For the query part I would use Python and write a small script that gets triggered via a cron job at the interval you like. Instead of parsing the contents of the page itself, I would use the RSS feed provided by the YouTube page; there's a Python module named "feedparser" with which you can easily access the feed contents and extract the related information (shouldn't be more than maybe 50 lines of code), write it to a database (SQLite) in the form that you like, and retrieve the contents via your PHP script on your webpage. Or completely skip the time-interval query part and just check for new videos when someone visits your website.
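    The answer above suggests Python's feedparser; just to show the shape of the polling step, here is a rough equivalent using only the JDK's built-in XML APIs. The feed URL and tag names are assumptions about the old YouTube RSS format, so treat it as illustrative only:

        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class FeedPoll {
            public static void main(String[] args) throws Exception {
                // Hypothetical feed URL: substitute the real one.
                String feed = "http://www.youtube.com/rss/global/recently_added.rss";
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(feed);
                NodeList items = doc.getElementsByTagName("item");
                for (int i = 0; i < items.getLength(); i++) {
                    Element item = (Element) items.item(i);
                    String title = item.getElementsByTagName("title").item(0).getTextContent();
                    String link = item.getElementsByTagName("link").item(0).getTextContent();
                    if (link.startsWith("http://www.youtube.com/watch?v=")) {
                        System.out.println(title + "\t" + link);  // or write to SQLite here
                    }
                }
            }
        }

    Run something like this from a cron job every 10 minutes and let the PHP page read whatever it stored.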
    Last edited by chimeric (2008-01-19 11:12:56)

  • Where can I get a copy (hardcopy or soft) of Apple's book "The Objective-C Programming Language"?

    I have recently begun Apple's course "Developing iOS Apps". To get through a section I need a copy of Apple's book "The Objective-C Programming Language". Does anybody know how I can get a copy? (I live in Australia.) It is really important.

    You used to be able to get it in the iBooks Store; that's where I got my copy. They had a whole section for Apple Developer Publications.  But I just looked and I don't see it there now. Not sure if Apple pulled it from the store.

  • How to know if Firefox is using a master password, from some programming language?

    I am working on a security check project and I need to know, from a programming language or by opening some file, whether Firefox is using a master password.

    Generally speaking, there is nothing in the HTTP request to indicate to a server the type of window the request originates from. So you will have to manage it at the application level.

  • Is Java a pure object-oriented programming language or not, and why?

    Please clarify:
    is Java a pure object-oriented programming language or not, and why?

    And there are some concepts of object orientation that Java does not implement, like operator overloading and multiple inheritance. But I think that we can live without those features. And the success of Java is a proof of this.

    I don't believe that operator overloading and multiple inheritance are required aspects of object-oriented programming.
