Data Visualization and Graphs for JSF

Are the Data Visualization and Graphs for JSF included in this release? I saw the demo here http://www.oracle.com/technology/products/jdev/11/index.html and want to try it out, but I could not seem to access it in the tool.
Regards
Orlando Kelly
Cayman Islands Government

Hi,
It seems you have no suitable libraries in your ViewController project. See in this image what libraries you need.
Kuba

Similar Messages

  • How to use same Data Type and Length for two fields

    How to use the same data type and length for two fields when using 'FOR ALL ENTRIES IN' in a select statement? For instance, the select queries are:
    SELECT bukrs gjahr belnr lifnr budat bldat zlspr dmbtr waers shkzg
    FROM bsik
    INTO TABLE it_bsik
    WHERE bukrs = p_bukrs
    AND lifnr IN s_lifnr.
    IF it_bsik IS NOT INITIAL.
    SELECT belnr gjahr awkey awtyp
    FROM bkpf
    INTO TABLE it_bkpf
    FOR ALL ENTRIES IN it_bsik
    WHERE belnr = it_bsik-belnr
    AND gjahr = it_bsik-gjahr.
    IF it_bkpf IS NOT INITIAL.
    SELECT belnr gjahr lifnr xblnr
    FROM rbkp
    INTO TABLE it_rbkp
    FOR ALL ENTRIES IN it_bkpf
    WHERE belnr = it_bkpf-awkey+0(10)
    AND gjahr = it_bkpf-awkey+10(4).
    ENDIF.
    ENDIF.
    Here it gives an error in the 3rd select query that 'When you use the addition "FOR ALL ENTRIES IN itab", the fields "GJAHR" and "IT_BKPF2-AWKEY+10(4)" must have the same type and the same length.'
    Kindly clarify.

    Hi Saurabh,
    Please see the example code that I have developed for you. It will help you solve the problem.
    REPORT ZTEST_3 .
    tables : BKPF.
    data : begin of it_bkpf occurs 1,
             belnr type RE_BELNR,
             awkey type awkey,
             awtyp type awtyp,
             gjahr type GJAHR,
           end of it_bkpf.
    data : begin of it_bkpf1 occurs 1,
             belnr type RE_BELNR,
             awkey type gjahr,              " change the data type
             awtyp type awtyp,
             gjahr type GJAHR,
           end of it_bkpf1.
    data : begin of it_rbkp occurs 1,
             belnr type BELNR_D,
             gjahr type gjahr,
             lifnr type LIFRE,
             xblnr type XBLNR,
           end of it_rbkp.
    select belnr
           awkey
           awtyp
           gjahr
           from bkpf
           into table it_bkpf
           where BUKRS = 'TELH'.
    loop at it_bkpf.
      it_bkpf1-belnr = it_bkpf-belnr.
      it_bkpf1-awkey = it_bkpf-awkey+10(4).   " move only the required length (the year part)
      it_bkpf1-awtyp = it_bkpf-awtyp.
      it_bkpf1-gjahr = it_bkpf-gjahr.
      append it_bkpf1.
      clear it_bkpf1.
    endloop.
    select  belnr
            gjahr
            lifnr
            xblnr
            from RBKP
            into table it_rbkp
            for all entries in it_bkpf1
            where belnr = it_bkpf1-belnr
            and   gjahr = it_bkpf1-awkey.   " both sides are now type GJAHR, so the type/length check passes
    This is just an example. Change the fields according to your requirement.
    Regards
    Abhii
    Edited by: Abhii on Mar 9, 2011 9:08 AM

  • Upcoming Lumira Webinar June 11th on the topic of "Big Data Visualization and Custom Extensions"

    The next webinar in the Lumira series is coming up this week on Wednesday, June 11th, 10:00 AM - 11:00 AM Pacific Standard Time. The day is almost here and if you've already registered, thank you!
    If not, please click here to register.
    Speaker Profile:
    The topic is presented by Jay Thoden van Velzen, Program Director, Global HANA Services/Big Data Services Center of Excellence at SAP!
    Jay has been working in Analytics/Business Intelligence since it was called Decision Support Systems in the late 90s. Currently he is focused on Big Data solutions and how to make the various components of such a solution run smoothly and integrate together using the SAP HANA Platform.
    Abstract:
    Big Data analysis poses unique and new challenges to data visualization, compared to more traditional analytics. Such analysis often includes frequency counts, analysis of relationships in a network, and elements of statistical and predictive modeling. In many cases, traditional visualization techniques of bar- and column charts, pie charts and line graphs are not the most appropriate. We have to avoid the “beautiful hairball” and make it easy for end users to absorb the information through clever use of filtering, transparency and interactivity. We will likely also need to provide more context to go with the visualization than we have been used to in traditional analytics. Moreover, in the case of forecasts you need to include any confidence intervals in order not to mislead.
    This means we need more chart types, and often the chart types you need may not exist, nor could the need for such chart types necessarily be foreseen. However, Lumira allows us to design and code our own D3.js visualizations and integrate them into Lumira while providing all the data access methods – including SAP HANA – that it provides out of the box. This means we can develop our visualizations to present the outcomes of Big Data analysis exactly as we feel they should be presented. During the webinar we will show a number of examples, and specifically the integration of forecasts coming from R into Lumira through a Lumira custom extension.
    We really hope to see you there!
    Cheers!
    Customer Experience Group

    Congrats to Joao and Alex!
     Microsoft Azure Technical Guru - May 2014  
    João Sousa
    Microsoft Azure - Remote Debugging How To?
    GO: "Clever. Well Explained and written. Thanks! You absolutely deserve the GOLD medal."
    Ed Price: "Fantastic topic and great use of images!"
    Alex Mang
    The Move to the New Azure SQL Database Tiers
    Ed Price: "Great depth and descriptions! Very timely topic! Lots of collaboration on this article from community members!"
    GO: "great article but images are missing"
    Alex Mang
    Separating Insights Data In Visual Studio Online
    Application Insights For Production And Staging Cloud Services
    Ed Price: "Good descriptions and clarity!"
    GO: "great article but images are missing"
    Ed Price, Power BI & SQL Server Customer Program Manager (Blog,
    Small Basic,
    Wiki Ninjas,
    Wiki)
    Answer an interesting question?
    Create a wiki article about it!

  • Help In keithley 2400 VI!!(Problem with the data logging and graph plotting)

    Hi, need help badly =(.
    My program works fine when I run it, and I tested it out with a simple diode. The expected start current steps up nicely to the stop current. The only problem is that when it ends, I cannot get the data log and the graph, though I have already written code for it. Can someone help me see what's wrong with the code? I've attached the necessary file below, and I'm working with LabVIEW 7.1.
    Thanks in advance!!!
    Attachments:
    24xx Swp-I Meas-V gpib.llb ‏687 KB

    Good morning,
    Without the instrument it might be hard for others to help troubleshoot the problem. Was there a specific LabVIEW programming question you had, are you having problems with the instrument communication, are there errors? I’d like to help, but could you provide some more specific information on what problems you are encountering, and maybe accompany that with a simple example which demonstrates the behavior? In general we will be unable to open specific code and debug it, but I’d be happy to help with specific questions.
    I did notice, though, that in your logging VI you have at least one section of code which appears to not do anything. It could be that a small section of code, or a wire, was removed and the data is not being updated correctly (see pic below). Is your file being opened properly? Is the data being passed to the file properly? What are some of the things you have examined so far?
    Sorry I could not provide the ‘fix’, but I’m confident that we can help. Thanks for posting, and have a great day-
    Message Edited by Travis M. on 07-11-2006 08:51 AM
    Travis M
    LabVIEW R&D
    National Instruments
    Attachments:
    untitled.JPG ‏88 KB

  • Xbee data extracting and graphing

    Hi,
    I have been trying for a few days to get this to work with no results. I have searched through Google and some of the forums for help with nothing that really helped.
    I am using two Xbee devices to transfer data to LabVIEW to graph. Eventually I would like to use the ADC on the Xbee to take an analog signal and convert it to a digital signal for LabVIEW to read, so I can do my analysis on it. I started with getting a basic VISA program to work, sending numbers from my Arduino Uno over and over again and placing those values into an array. This works great with no problems. The problem comes when I try to send a signal through the Xbee.
    I tried to send an analog signal out to the ADC and I was getting some values in LabVIEW in hex, but I was unsure what they really meant. My second attempt was just to try to create a pulse wave using my Arduino, send that to LabVIEW and have LabVIEW try to graph the incoming signal. This however did not work and I am not getting any data. I connected both Xbees up on a breadboard without LabVIEW to see if I could get a signal out of the receiving Xbee, and it worked. So the problem should not be with my Xbees but with my code.
    The code is where I am not sure. I was able to find examples of how to use VISA and was given a basic example of it. I have then tried to manipulate the program to see if it would work for me. I am trying to take the signal from the VISA Read function and convert it to a 1D array to display on a graph. I have also tried to separate the data into an array char by char from the string. However, none of this worked. I am getting a zero value out for my graph and no values into my array. I am also getting no value into my READ String area.
    The values I have sometimes gotten look like:
    00\00\00\00\00\94\00\00\ and so on with different values. Sometimes the byte count read is very large, up to 475 or so when I am sending the pulse wave, and sometimes smaller, around 19, when I tried to connect the ADC.
    I have attached some of my VIs to see if anyone can come up with a better way or point me in a better direction.
    The png is a pic of the results I got from the pulse input. So it looks like I am not getting any data at all.
    Thanks for the help
    Juan
    Attachments:
    Basic VISA_Test_1.vi ‏26 KB
    Result_OfpulseWave.png ‏129 KB

    Hello jrod03,
    I would suggest using the NI LabVIEW Interface for Arduino Toolkit found here (http://sine.ni.com/nips/cds/view/p/lang/en/nid/209835) and posting your Arduino related questions in our LabVIEW Interface for Arduino community (https://decibel.ni.com/content/groups/labview-interface-for-arduino).
    This document will also probably be helpful to you (http://digital.ni.com/public.nsf/allkb/8C07747189606D148625789C005C2DD6).
    Jonathan L.
    Applications Engineer
    National Instruments
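    As a side note, and not specific to LabVIEW: values like \00\00\94 usually mean the real question is how the raw serial bytes are framed into samples. Purely as an illustration, assuming the Arduino sends each reading as two bytes with the high byte first (an assumption, not something stated in the posts above), the string-to-array step amounts to something like this in Java:
    import java.util.Arrays;
    public class SerialFraming
    {
         // Illustration only: turn a raw serial buffer into numeric samples,
         // assuming each sample arrives as two bytes, most significant byte first.
         public static int[] toSamples(byte[] raw)
         {
              int[] samples = new int[raw.length / 2];
              for (int i = 0; i < samples.length; i++)
              {
                   int hi = raw[2 * i] & 0xFF;      // mask to undo Java's signed bytes
                   int lo = raw[2 * i + 1] & 0xFF;
                   samples[i] = (hi << 8) | lo;     // reassemble the 16-bit reading
              }
              return samples;
         }
         public static void main(String[] args)
         {
              // The kind of buffer described in the post: mostly zeros with a 0x94 in it.
              byte[] raw = {0x00, 0x00, 0x00, (byte) 0x94, 0x00, 0x00};
              System.out.println(Arrays.toString(toSamples(raw)));  // prints [0, 148, 0]
         }
    }
    However the conversion is done (in LabVIEW it would be the string-to-byte-array step before building the plot array), the byte order and sample size have to match what the Arduino actually transmits.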

  • Data archival and purging for OLTP database

    Hi All,
    Need your suggestion regarding data archival and purging solution for OLTP db.
    Currently, we are planning to generate flat files from the tables before purging the inactive data, move them to tapes/disks for archiving, and then purge the data from the system. We have many retention requirements and conditions before archival of data, so partitioning alone is not sufficient.
    Is there any better approach for archival and purging other than this flat file approach?
    thank you.
    regards,
    vara

    user11261773 wrote:
    Is there any better approach for archival and purging other than this flat file approach?
    FBDA (Flashback Data Archive) is the better option. Check the below link:
    http://www.oracle.com/pls/db111/search?remark=quick_search&word=flashback+data+archive
    Good luck
    --neeraj
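    For anyone weighing the two approaches, here is a minimal sketch of what the FBDA route can look like from application code (Java over JDBC, assuming Oracle 11g or later; the connection string, the tbs_archive tablespace and the orders table are made-up names for illustration, not part of the original post):
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    public class FlashbackArchiveSketch
    {
         public static void main(String[] args) throws Exception
         {
              // Placeholder connection details; the Oracle JDBC driver must be on the classpath.
              try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "archive_admin", "password");
                   Statement stmt = conn.createStatement())
              {
                   // One-time setup: a named archive with its own quota and retention policy.
                   stmt.execute("CREATE FLASHBACK ARCHIVE fda_orders "
                              + "TABLESPACE tbs_archive QUOTA 10G RETENTION 5 YEAR");
                   // Enable archiving on the table; older row versions are then kept automatically.
                   stmt.execute("ALTER TABLE orders FLASHBACK ARCHIVE fda_orders");
                   // Historical rows stay queryable in place, without restoring flat files from tape.
                   stmt.executeQuery("SELECT * FROM orders "
                                   + "AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' YEAR)");
              }
         }
    }
    Whether this can replace the flat-file-to-tape step depends on the retention conditions: FBDA keeps the history inside the database (and needs the appropriate privileges and edition/licensing checks), whereas the tape approach moves the data out of the system entirely.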

  • How to use the (gigabit) ethernet for data exchange and WLAN for Internet?

    I have following setup:
    - MB Pro SL 10.6
    - Desktop PC Windows 7
    - Wireless Printer
    - Netgear WLAN Router
    At the moment all my data and internet communication is done via the WLAN router on 192.168.1.x. That works fine and I can exchange data, print and surf the internet. My problem is that big data exchanges > 10 GB take ages via my 54 Mbit WLAN connection. Furthermore, the WLAN router is far away from the PC, so it is not possible to plug the Ethernet cables of the PC and the MB into the router.
    Now I've thought that there must be a possibility to use a 1 gigabit (cross) cable (like in the 90s ;)) to connect the MB and the PC directly. I know that I will need different IP ranges and so on, but I have no clue how to do that.
    The final solution should be that the setup stays as it is (data exchange, print, internet via WLAN) and that additionally it is possible to connect the PC and the Mac sometimes for big transfers via a Gigabit Ethernet cable. Because I only want to connect those two machines, I don't think that a switch makes any sense, or?
    Does someone know how to do that?
    Thanks a lot in advance!
    Message was edited by: whitepowder

    Well if it were 2 Macs, I would configure each Mac with a fixed IP address (same subnet as my router, but outside of the router's DHCP assignment range).
    I would give the same fixed IP address to both my Airport AND my ethernet interfaces. I would do the same to the other Mac using another fixed IP address.
    My network service order would have the ethernet higher than the Airport on both systems.
    Normally I would use WiFi, so ethernet would be idle and out of the loop.
    When I wanted faster transfers, I would connect each Mac to the router via their own ethernet cables, and the Mac would automatically detect that the ethernet became available, and would switch to use the faster ethernet. Since Airport and ethernet have the same fixed IP address, I could even do this in the middle of a file transfer and no one would notice as packets were always going to/from the same IP addresses, just using a different route, which doesn't matter.
    NOTE: I've been doing this on Macs since Mac OS X 10.3 days (or was it 10.2; too long ago to remember).
    However, you have thrown a Windows system into the mix, and since I do not use Windows, I do not have a clue about what can be done on Windows. However, maybe this will give you some ideas, and asking the right questions in a Windows oriented forum may provide the Windows side of this setup.
    Sorry, that is the best I can do. Maybe my reply will encourage someone else to offer their approach.

  • Master Data: Transformation and DTP for compounded characteristic

    Good day
    Please assist, I am not sure what is the correct way.
    I extract master data from a db via DB connect.
    There are three fields in the db view for extraction. (1) Code (2) Kind and (3) Code Text.
    What I have done is the following. I created a datasource with transformation and DTP for master data text for (1) and (3), and then a datasource master data attribute transformation and DTP for (1) and (3).
    Is this the correct way to handle extracts of compounded characteristics?
    Your assistance will be appreciated.
    Thanks
    Cj

    Hello,
    If the characteristic 'Code' is compounded with 'Kind', then for the text datasource you should have 1, 2 and 3; the table for this datasource should have 'Code' and 'Kind' as keys.
    For the attribute datasource you should have 1 and 2 followed by the required attributes of 'Code'.
    Regards,
    Dhanya

  • How to clear data from XY graph for another trial

    I have an XY graph of motor counts versus intensity. I used shift registers for the Build Array VI. After terminating the program once using the stop terminal, I cleared the graph. When I start taking measurements a second time, the same data repeats before the new data. How can I clear it? Please reply
    Solved!
    Go to Solution.

    That's one very quick way of doing it: initialize your shift registers.
    What happens in your VI is that after the VI stops (when you click the stop button) the data that is in the shift registers (your X and Y arrays) stays in memory, so if you run the VI again it will add data into those arrays.
    To initialize your shift registers you need to connect an empty array to them before the while loop. Like this:
    Hope this helps
    When my feet touch the ground each morning the devil thinks "bloody hell... He's up again!"
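    As an aside for readers who think in text-based languages: an uninitialized shift register behaves like a static field that keeps its contents from one run to the next, and wiring an empty array to it is the equivalent of resetting that field at the start of each run. A hypothetical Java analogy (not LabVIEW code, just the same idea):
    import java.util.ArrayList;
    import java.util.List;
    public class ShiftRegisterAnalogy
    {
         // Like an uninitialized shift register: keeps whatever the last run left behind.
         private static final List<Double> history = new ArrayList<Double>();
         static void runOnce(double[] newSamples, boolean initializeFirst)
         {
              if (initializeFirst)
              {
                   history.clear();   // the "empty array wired before the loop"
              }
              for (double s : newSamples)
              {
                   history.add(s);    // the Build Array step inside the loop
              }
              System.out.println("points on the graph: " + history.size());
         }
         public static void main(String[] args)
         {
              double[] trial = {1.0, 2.0, 3.0};
              runOnce(trial, false);  // 3 points
              runOnce(trial, false);  // 6 points: the old data is still there, as in the question
              runOnce(trial, true);   // 3 points: cleared before the new trial
         }
    }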

  • Master data maintenance and attributes for GTIN/EAN

    A review of the SAP documentation indicates that "You can use the consistency report to change the attributes (for example, quantity, length, height, and so on) of certain materials". The "and so on" seems to imply that the attributes can be user defined. If this is correct, where is this data kept? Or do the attributes only include the fields from the Basic data 1 Dimensions/EAN data and Plant Data storage General data?
    I would expect we can use LSMW to mass update the GTIN for our existing material numbers. However, for new material master creation, is there a function module that would be able to pull the next available GTIN number from the internal number range? Please note that we use LSMW to create all our material numbers. We would like to have the LSMW insert the next available GTIN number.
    Thank you for your response
    Brian

    Hi
    I had tried all sorts - attribute change run and manually activating the master data. Still doesn't work.
    Anthony

  • Data handling and graphs

    I originally wrote the graph program to handle the data in a text file that was organized like this...
    vertex a
    vertex b
    vertex c
    vertex d
    edge a c
    edge a d
    edge d b
    and now I have to change the main to accept a datafile containing...
    a b
    b c
    c e
    d g
    g c
    Now, here is a copy of the main program as it currently stands...
    import java.io.*;
    import java.util.*;
    public class TopSort
    {
         static Graph theGraph = new Graph();
         static Hashtable hashList = new Hashtable();  //just to store array index
         public static void main (String args[])
         {
              MyInfoachaffin myInfo = new MyInfoachaffin();
              myInfo.info();
              File sourceFile = new File(args[0]); // access the file
              if (sourceFile.exists() == false)
              {
                   System.err.println("Oops: " + sourceFile + ": No such file");
                   System.exit(1);
              }
              String newVertex, startEdge, endEdge;
              int arrayPosition = -1;
              try //open the file
              {
                   FileReader fr = new FileReader(sourceFile);
                   BufferedReader br = new BufferedReader(fr);
                   String input;
                   while ((input = br.readLine()) != null)
                   {
                        StringTokenizer toke = new StringTokenizer(input);
                        while (toke.hasMoreTokens())
                        {
                             if (hashList.containsValue(toke))
                             {
                                  return;
                             }
                             else
                             {
                                  newVertex = toke.nextToken(); //get vertex
                                  theGraph.addVertex(newVertex); //add into graph
                                  arrayPosition++; //increment counter
                                  hashList.put(newVertex, new Integer(arrayPosition));
                                  //add position with vertex as key
                             }
                             /*else if (toke1.equals("edge"))
                             {
                                  startEdge = toke.nextToken(); //get edge
                                  endEdge = toke.nextToken();  //get vertex
                                  Integer temp = ((Integer)hashList.get(startEdge));
                                  int start = temp.intValue(); //find position with key
                                  Integer temp2 = ((Integer)hashList.get(endEdge));
                                  int end = temp2.intValue();  //find position with key
                                  theGraph.addEdge(start, end); //add edge
                             }*/
                        }//close inner while
                   }//close outer while
                   System.out.println("The hashtable contents: " + hashList.entrySet());
                   br.close();
              }//close try
              catch (IOException e)
              {
                   System.out.println("Whoops, there's a mistake somewhere.");
              }
              theGraph.graphSort();
         }//end main
    }//end class
    I have managed to separate the vertexes from the edges and stored them in a hashtable so as to be able to remember their location in the array itself. My question is: is there a way to go through a file a second time and pull the vertexes from that file or, conversely, should I store the data in the file into, oh I dunno, a linked list or something that I can pull the data from? The graph part of the program works so I didn't add that file, and setting the vertexes works fine too...I'm just stuck on how to handle the edges, exactly, so does anyone have any advice? Right now, the adding edges is commented out because that was how I handled it with the original data...

    Whoa, you're freakin' me out.
    All you gotta do is read in the data in a new format. You can either translate the new format to the old format, or you can write a method that creates objects from the new format explicitly. That's all there is to it.
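    To make that concrete, here is one rough sketch of the second option: a method that reads the new "a b" edge-per-line format directly, reusing the Graph and Hashtable pieces the posted main already has (addVertex, addEdge and the index-tracking hashtable are assumed to behave exactly as in that code; the class and method names below are made up for illustration):
    import java.io.*;
    import java.util.*;
    public class EdgeListLoader
    {
         // Sketch only: parse lines like "a b", registering unseen vertices and then the edge.
         public static void load(String fileName, Graph theGraph, Hashtable hashList) throws IOException
         {
              BufferedReader br = new BufferedReader(new FileReader(fileName));
              String input;
              int arrayPosition = hashList.size() - 1; // continue numbering after any existing vertices
              while ((input = br.readLine()) != null)
              {
                   StringTokenizer toke = new StringTokenizer(input);
                   if (toke.countTokens() < 2)
                        continue; // skip blank or malformed lines
                   String start = toke.nextToken();
                   String end = toke.nextToken();
                   if (!hashList.containsKey(start))
                   {
                        theGraph.addVertex(start); // first time this vertex is seen
                        hashList.put(start, new Integer(++arrayPosition));
                   }
                   if (!hashList.containsKey(end))
                   {
                        theGraph.addVertex(end);
                        hashList.put(end, new Integer(++arrayPosition));
                   }
                   // both endpoints are known by now, so the edge can be added in the same pass
                   theGraph.addEdge(((Integer) hashList.get(start)).intValue(),
                                    ((Integer) hashList.get(end)).intValue());
              }
              br.close();
         }
    }
    Because every line already names both endpoints, one pass is enough; there is no need to read the file a second time or to park the edges in a separate linked list first.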

  • Multiple data sources and cubes for mining

    We have two data sources:
     1 - OLTP database for transactional operations
     2 - Data Warehouse for analysis
    We use change data capture to track changes in the OLTP and upload to DW each night.
    Currently, we have one cube built on top of our DW for analysis and KPI's etc
    However, if we wish to use the OLTP DB for data mining, can this done in the same solution using a new data source or do we need to create a new cube etc?

    Hi Darren,
    According to your description, you are going to use the OLTP DB for data mining, and what you want to know is whether this can be done in the same solution using a new data source or whether you need to create a new cube.
    In your scenario, if the cube structure for data mining is the same as the original cube, then you needn't create anything; just edit the connection of the data source to point to the OLTP database. If the cube structure for data mining is not the same as the original cube, then I am afraid you need to create a new cube.
    Regards,
    Charlie Liao
    TechNet Community Support

  • Best Data rate and format for Big Screen Playback

    I am a video producer supplying content for a tradeshow display consisting of 18 flat screens oriented vertically, 8 over 8. Very heavy duty Windows playback machine, with 3 graphics cards, 1 for each set of 6 monitors. The total pixel size is 3240 x 1280 px. We have produced a number of full screen and smaller movies that the interactive programmer is trying to integrate into a Flash program where some videos will play full screen, and others in smaller windows (those were produced in 1920x1080). All videos were mastered at the specified pixel count, square pixels, progressive, and output in ProRes 422, 44.1K audio. Flash is being used to create the interface, and I am transcoding to .F4V at lower bit rates like 4, 6, and 9 Mbps. They are having trouble playing these movies back smoothly. We have done similar projects in the past with Director without issue. Of course they are blaming the video; I am offering to encode in any format at any data rate they suggest. Can anyone suggest a direction here?

    Are you trying to burn a standard DVD or are you trying to put a QuickTime .mov file on DVD media?
    A standard DVD doesn't worry about file size (only the duration).
    A "data" DVD is limited to the type of media used. A single layer DVD is about 3.7 GB's.
    In order to make a data DVD you need to keep the data rate low enough to not exceed the DVD media playback abilities. They can't handle the higher rates found in many of the preset options.
    You can avoid all of these headaches by telling the viewer to "copy" the .mov file from the DVD to their Desktop. Then nearly any of the preset options will work.
    H.264 is a great video codec and the "automatic" preset should work just fine. Use "multi-pass" for best quality (takes a very long time). Remove the check mark for "Audio" since your file has none. Leave it checked and you waste file size because a silent audio track would be added.

  • Error loading master data attr and text for 0material

    Hi gurus,
    I am new to BI. I am getting an error RSDMD 194 when I am loading 0MATERIAL_ATTR and TEXT.
    It is showing for nearly 100 records in the error stack. 
    Can anyone explain some basics,
    1) Why do we need to add 0MATERIAL to a particular Info Area to start loading its ATTR and TEXT?
    2) Are the info objects that appear in the attributes tab of 0MATERIAL the same as the fields of the 0MATERIAL_ATTR datasource? Basically what I don't understand is: are we mapping the 0MATERIAL_ATTR datasource fields to the info objects appearing in the attributes tab of the 0MATERIAL info object?
    3) Also, when I added 0MATERIAL to my Info Area, there were some extra info objects that got in; are there any dependent objects that get added?
    <removed by moderator>
    Edited by: Siegfried Szameitat on May 19, 2009 11:53 AM

    Hi,
    "Error RSDMD- 194 when i am loading 0MATERIAL_ATTR and TEXT.RSDMD- 194"
    Please check if there is any external characterstic in those 100 records, correct it and load again.
    1) why do we need to add 0MATERIAL to a particular Info area to start loading its ATTR and TEXT
    It does,t matter where the 0Material is. Info area in kind of Folder to easily locate and put the all the related objects at one place.
    2) Are the info objects that appear in the attributes tab of 0MATERIAL the same as the fields of 0MATERIAL_ATTR datasource, basically what i don't understand is are we mapping the 0MATERIAL_ATTR datsource fields to the info objects appearing in the attributes tab of 0MATERIAL info object.
    If it is a business content infosource, it will automatically map the attribute to data source field. If you are modifying or creating your own object, you should map it manually based on field name and description.
    3) Also when i added 0MATERIAL to my Info Area , there were some extra info objects that got in, are there any dependent objects that get added ?
    If you will move any objects all the compounded objects will also come to the info area.
    Regards,
    Kams

  • I w'd like to know if LabVIEW 6.0 Application Builder includes the daqdrv (for data acquisition) and serpdrv (for serial communication) support files by default.

    Building an application to communicate to a device by serial port

    Hi velou
    The LabVIEW 6i Application Builder no longer requires daqdrv. Regardless of your Application Builder and LabVIEW version, you must always install the appropriate driver files on the target machine. For example, if your application communicates with a DAQ board and a GPIB board, then you must install NI-DAQ and NI-488.2 on your target machine. If you are using the VISA VIs for serial communication, then you have to install NI-VISA. If you are using the "old" serial VIs, then you have to include serpdrv separately.
    Luca P.
    Regards,
    Luca

Maybe you are looking for

  • When will the raw plug-in for the Nikon D610 in CS6 on iMac be ready?

    I am the owner of a Nikon D610 but I cannot open my NEF files in CS6 on iMac. Is Adobe working on a raw plug-in for the D610 on Mac and if so, when will it come out?

  • Documents won't print, although there is no problem with Explorer

    Documents that I try to print from a web page go into the printer queue, but won't actually print (eg a flight ticket or a portfolio summary). The document will print perfectly if I use Explorer.

  • Displaying Data Horizontally in ALV

    Dear All, this is my final internal table:
    Plant   Pricing Procedure   Material Group   Qty
    VA01    31                  CF               1007
    VA01    31                  DT               358.6

  • The new forum software

    Folks, let's take a vote. Do you just love (10) or utterly despise (-10) the new forum software? -8 Against: 1. It's kitschy. 2. It's buggy. 3. Source code is HARDER to read than ever. 4. They should have adopted wiki markup. For: Ummm. RTF is kinda cute

  • PowerBook Won't start-up: Eternally Spinning Clock at Apple Icon

    Hi. I have a G4 PowerBook (details below). During a business trip last week it started acting funny so I used DiskUtility (running from the start-up internal system hard drive on the PowerBook) to attempt to repair permissions. This repair attempt fr