Benefits of RAW vs. TABLE

I was looking at converting most of my collections to RAW instead of TABLE. Using RAW appears to make most of my reports and their SQL substantially more accessible, since I don't have to do some pretty gnarly multiple joins against the MGMT$METRIC_CURRENT views. But in looking at one of your own plug-ins, I was surprised to see that when you use RAW, you don't have any CONDITIONS specified in the default_collection.
Can metrics collected as "RAW" also contain CONDITIONS? For example, for CpuUtilization here, if SimpleMetrics are collected as RAW, would this threshold still work? Here is the metadata snippet:
<Metric NAME="SimpleMetrics" TYPE="RAW" CONFIG="TRUE">
  <Display>
    <Label NLSID="xxxxx">Simple Metrics</Label>
  </Display>
  <TableDescriptor TABLE_NAME="MGMT_SXXXXX0_SIMPLE_METRICS">
    <ColumnDescriptor NAME="CpuUtilization" COLUMN_NAME="CpuUtilization" TYPE="NUMBER">
      <Display>
        <Label NLSID="xxxx_cpu_utilization">CPU Utilization %</Label>
      </Display>
    </ColumnDescriptor>
And the Default Collection snippet:
<CollectionItem NAME="SimpleMetrics">
  <Schedule>
    <IntervalSchedule INTERVAL="10" TIME_UNIT="Min"/>
  </Schedule>
  <Condition COLUMN_NAME="CpuUtilization" CRITICAL="90" WARNING="80"
             OPERATOR="GE" OCCURRENCES="3"
             MESSAGE="The CPU Utilization Percentage is %value% and has remained above the warning (%warning_threshold%) or critical (%critical_threshold%) threshold."
             MESSAGE_NLSID="cpu_percentage_used_cond"/>
Next, I noticed that your CMDB information tended to be collected as RAW whereas your detailed statistics were collected in a TABLE. Is there an advantage to collecting static information as RAW and highly variable information in a TABLE, or was this simply reporting convenience?
Finally, I want to present many of the highly variable statistics as time-span graphs. If I collect a stat as RAW, do the MGMT$METRIC_HOURLY (etc.) tables still get populated?
Thanks for the Help
Paul Monday

The RAW designation is only used for the configuration data metrics. Those are collected as RAWs because it's a requirement of the ECM system. No conditions are set because all of that information is configuration data and it doesn't really make sense to alert on that (it's relatively static). If there's a change, the user finds out through the ECM system and not the alerting system.
And, as you saw, TABLE is used for all of the performance metrics. Using RAW for the metric means that the data will not be stored in the EM metric tables and is instead stored in an independent table. The data won't be rolled up and isn't accessible from the MGMT$METRIC views if it's RAW.
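In query terms the difference looks roughly like this (a sketch only, using the table and metric names from the snippets above):

-- TABLE metrics land in the EM repository and are exposed through the
-- MGMT$ views (and rolled up into the _HOURLY/_DAILY variants):
SELECT target_name, metric_column, value
  FROM MGMT$METRIC_CURRENT
 WHERE metric_name = 'SimpleMetrics';

-- A RAW metric bypasses those views; you query its own table directly:
SELECT * FROM MGMT_SXXXXX0_SIMPLE_METRICS;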

Similar Messages

  • How to find the long/raw datatype tables

    Hi all,
    I want to find the LONG/RAW datatype tables in an Oracle database.
    Please provide the query.
    Oracle version: 10gR2
    Platform: HP-UX

    Hi,
    Is this what you are looking for?
    SELECT
          TABLE_NAME,
          COLUMN_NAME,
          OWNER
    FROM
          DBA_TAB_COLUMNS
    WHERE
          DATA_TYPE IN ('LONG', 'RAW', 'LONG RAW'); -- 'LONG RAW' is its own DATA_TYPE value
    Regards

  • Copy Long Raw from table 1 to another table in Oracle 8i

    I have 2 similar tables in Oracle 8i.
    Table 1:
    Id number;
    blob_obj long raw;
    Table 2:
    Id number;
    blob_obj long raw;
    Is there any way to copy the data from table 1 to table 2?
    Is it possible to do some conversion and load it?
    Kindly provide pointers for this issue.

    See this thread Re: Getting LONG RAW from remote database for some useful ideas.
    Regards Nigel
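    One direct route, if every value fits within PL/SQL's 32,760-byte LONG RAW limit, is a row-by-row copy (a sketch only; table and column names are taken from the question, and it will fail on longer values):
    DECLARE
      v_data LONG RAW;  -- PL/SQL caps LONG RAW variables at 32,760 bytes
    BEGIN
      FOR r IN (SELECT id FROM table1) LOOP
        -- fetch each LONG RAW individually, then re-insert it
        SELECT blob_obj INTO v_data FROM table1 WHERE id = r.id;
        INSERT INTO table2 (id, blob_obj) VALUES (r.id, v_data);
      END LOOP;
      COMMIT;
    END;
    /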

  • Benefits and Payment - overview table and drop down are not populated

    Hi All,
    The Overview dropdown and table are not getting populated. I have checked the flow of code in Web Dynpro:
    1) Vcrem2Comp
    2) Component FcRepFramework
    I tried to check whether these values are coming from the ABAP side by using the wdComponentAPI message manager.
    Please guide
    Regards,
    Ganga

    Below is the code I am using:
        @Override
        public void start(Stage stage) throws InterruptedException {
            // create the scene
            stage.setTitle("Test");
            Pane stackPane = new Pane();
            Browser browser = new Browser();
            stackPane.getChildren().add(browser);
            Rectangle2D primaryScreenBounds = Screen.getPrimary().getVisualBounds();
            webScene = new Scene(stackPane, primaryScreenBounds.getWidth(), (primaryScreenBounds.getHeight() - 50), Color.web("#666970"));
            //webScene.getStylesheets().add(this.getClass().getResource("main.css").toExternalForm());
            stage.setScene(webScene);
            stage.show();
        }

    class Browser extends Region {
        final WebView browser = new WebView();
        final WebEngine webEngine = browser.getEngine();
        public Browser() {
            //apply the styles
            getStyleClass().add("browser");
            // load the web page
            String url = "http://localhost:8000/myApp";
            webEngine.load(url);
            //add the web view to the scene
            getChildren().add(browser);
        }
    }
    When I run with the normal URL, the drop-down items appear in a bigger, bold font, but when I run through JavaFX, the drop-down list items appear in a small, normal font. I can provide screenshots, but I am not sure how to paste images or attachments here. Let me know if any more information is required.

  • No data in Portal Database tables for Activity Report

    Hi experts,
    I've developed an Activity Report application in SAP Portal 7.0, which went live, but the report shows no data.
    We are pulling data from two portal database tables: WCR_WEBCONTENTSTAT and WCR_USERPAGEUSAGE.
    In the non-production environment there is data in the report, but there is no data in production.
    The Activity Report service is already activated/started and set to true.
    What could have caused this? And what should we check now?
    What other configurations/setup should be done?
    Regards,
    Greg

    Hi Greg,
    Those are the aggregated tables. If they are not filled with data although the Portal Activity Report is activated, you should check whether the aggregation finished successfully.
    In the older SPs there were some problems that were fixed in later SPs of 7.0.
    In order to have the latest version of Portal Activity report, you can check SAP note 1084379 - Portal Activity Report - Latest Version (SDA file).
    You can compare the SP via the MANIFEST file, as it contains the version and SP number.
    In order to troubleshoot problems in Activity Report, you can follow SAP note: 1690023 - Portal Activity Report - Component-specific Note
    Some basic checks that you can do:
    Run a query on the raw data tables to check how far back there is data in those tables:
    select min(timestamphour) from SAP<SystemID>DB.WCR_WEBCNODESTAT;
    If there is data from too long ago, you should delete the old data and leave only the new data (there is in any case a retention time for which the data is kept).
    If there is no data at all, it means the Portal Activity Report is not collecting data and is not really activated (usually this is not the case).
    The aggregation runs at the top of every hour, so you can check the default traces for an error around that time.
    In most of the cases something went wrong while aggregating the data.
    As a result the aggregation is not finished, so the transaction is not being committed, and the aggregated tables stay empty.
    If there is a DuplicateKeyException in the trace, you can follow SAP note 1054145 - Duplicate Key Exception.
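    A similar sanity check can be run against the aggregated tables named above (a sketch, using the same schema prefix as the raw-table query):
    select count(*) from SAP<SystemID>DB.WCR_WEBCONTENTSTAT;
    select count(*) from SAP<SystemID>DB.WCR_USERPAGEUSAGE;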
    If you have any more questions, please don't hesitate to ask.
    I hope this information helps,
    Thanks & regards,
    Michal Zilcha-Lang

  • HOW TO SET A RAW FIELD

    Hi,
    I'd like to know how I can set a RAW field to the value '10' (by '10' I mean the byte value '00000101').
    Is there a way to do this at a low level, without a cast?
    Thanks and best regards,
    Neil

    The string 00000101 generally represents the binary value of 5, so I am unsure exactly what you want, but maybe the following will help:
    UT1 > set echo on
    UT1 > col dumped format a30
    UT1 > drop table example;
    Table dropped.
    UT1 > create table example (fld1 varchar2(10), fld2 raw(02));
    Table created.
    UT1 > insert into example values ('Row 1',hextoraw('05'));
    1 row created.
    UT1 > insert into example values ('Row 2',hextoraw('0A'));
    1 row created.
    UT1 > select fld1, fld2, dump(fld2,16) as dumped from example;
    FLD1   FLD2   DUMPED
    Row 1  05     Typ=23 Len=1: 5
    Row 2  0A     Typ=23 Len=1: a
    On a big-endian machine the 8 bits of a byte are numbered 0-7 left to right, while on a little-endian machine they are numbered 0-7 right to left; this may be where the confusion in your post comes from. On my system the Oracle functions appear to follow big-endian display conventions, but I suspect the physical byte settings match; you would need to test this by dumping the raw data on the target platform. The only difference it makes is which character value you choose to get the bit pattern you want.
    The UTL_RAW package provides VARCHAR2-to-RAW (and back) conversion functions.
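    For example (a quick illustration; both statements are plain SQL):
    SELECT utl_raw.cast_to_raw('A') AS as_raw FROM dual;                 -- 41
    SELECT utl_raw.cast_to_varchar2(hextoraw('41')) AS as_vc2 FROM dual; -- A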
    HTH -- Mark D Powell --

  • How do I convert a RAW photo to JPEG in Elements 12??? Thanks for any help!

    How can I convert a RAW photo to JPEG using Elements 12?

    In general, you open and edit the RAW photo in Adobe Camera Raw, and then either:
    click on Done, and then in the Organizer use File->Export->As New File and select JPG; or
    click on Open, and then in the Editor do a File->Save As and select JPG.
    You would NOT want to convert an un-edited RAW to a JPG in general, as you would lose all the benefits of RAW (superior image quality) while having already incurred all of its disadvantages (larger file size, extra steps, specialized software needed).

  • Jsf data table component + print null cell

    I am using the jsf data table component and binding the column values to a backing bean.
    <h:dataTable binding="#{backing_showDifferences.dataTable2}"
    id="dataTable2">
    <h:column binding="#{backing_showDifferences.userColumn1}"/>
    <h:column binding="#{backing_showDifferences.userColumn2}"/>
    <h:column binding="#{backing_showDifferences.userColumn3}"/>
    </h:dataTable>
    - some code from my showDifferences.java
    HtmlOutputText column1Text = new HtmlOutputText();
    vb = FacesContext.getCurrentInstance().getApplication().createValueBinding("#{users.uclass}");
    column1Text.setValueBinding("value", vb);
    usercolumn1.getChildren().add(column1Text);
    HtmlOutputText column2Text = new HtmlOutputText();
    vb = FacesContext.getCurrentInstance().getApplication().createValueBinding("#{users.ue1}");
    column2Text.setValueBinding("value", vb);
    usercolumn2.getChildren().add(column2Text);
    HtmlOutputText column3Text = new HtmlOutputText();
    vb = FacesContext.getCurrentInstance().getApplication().createValueBinding("#{users.ue2}");
    column3Text.setValueBinding("value", vb);
    usercolumn3.getChildren().add(column3Text);
    ResultSetDataModel dataModel = new ResultSetDataModel();
    dataModel.setWrappedData(rs);
    dataTable2.setValue(dataModel);
    The raw HTML:
    <table id="form1:dataTable2" class="iuptable" border="1">
    <thead>
    <tr>
    <th scope="col">Heading 1</th>
    <th scope="col">Heading 2</th>
    <th scope="col">Heading3</th>
    </tr>
    </thead>
    <tbody>
    <tr>
    <td>some data in this column</td>
    <td>X</td>
    <td></td>
    </tr>
    <tr>
    <td>Some more data in this row</td>
    <td>X</td>
    <td></td>
    </tr>
    </tbody>
    </table>
    My problem is this: in the raw HTML, the <td></td> tag is not formatted nicely in my table output. I have lines around my entire table and each cell within it, but where the empty <td></td> prints there are no lines. I am new to the JSF data table component, but if I were writing JSP or servlet code I would check whether the value was null and append an &nbsp; to the <td> tag (e.g. <td>&nbsp;</td>) so the table would be formatted properly. The backing bean that I am binding to pulls the data from a database table, so my SQL looks like this:
    SELECT uclass, ue1, ue2 FROM table1; my problem is when ue1 or ue2 is a null value.
    Any ideas would be greatly appreciated!

    Hi,
    the h:dataTable belongs to the JSF Reference Implementation from Sun, not to Oracle ADF Faces. The rendering happens within that component set, so I suggest reporting your issue on one of the Sun forums (http://forum.java.sun.com/forum.jspa?forumID=427), as we have no way to fix an issue if it exists in that component set.
    Frank
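    One workaround along the lines the poster suggests is to do the substitution in the SQL itself (an untested sketch; it assumes the columns are rendered unescaped, e.g. via HtmlOutputText.setEscape(false), so the entity is not HTML-escaped):
    -- substitute a non-breaking space for NULLs so the cell is never empty
    SELECT uclass,
           NVL(ue1, '&nbsp;') AS ue1,
           NVL(ue2, '&nbsp;') AS ue2
      FROM table1;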

  • Problems using table (cast as)

    Hi
    I have some code like this:
    declare
      TYPE t_forall_bags IS TABLE OF misbag.bags%ROWTYPE;
      l_forall_bags t_forall_bags := t_forall_bags();
    begin
      open c2;
      FETCH c2 BULK COLLECT INTO l_forall_bags LIMIT v_array_size;
      if l_forall_bags.COUNT > 0 then
        begin
          merge into misbag.bags dest
          using (select col1,
                        col2,
                        colx
                   from TABLE( cast( l_forall_bags as t_forall_bags ) )) src
          on (dest.bag_id = src.bag_id)
          when matched then
            --do update stuff
          when not matched then
            --do insert stuff;
        end;
      end if;
    end;
    On compilation I am getting an ORA-00902 "invalid datatype", seemingly on the t_forall_bags inside the CAST (highlighted in bold above).
    I thought I had the syntax correct, but maybe not.
    rgds
    Tony

    BluShadow wrote: "Why are you querying data from the database into a collection (in expensive PGA memory) to then pass that back down to the SQL engine to be treated as a table (and incidentally one without any indexes or the other benefits of a database table)?"
    Well, that is a very good question.
    The task is to take a generally small number of very recent rows from one table and apply them to a similar table in another schema. This task will run very frequently (i.e. every second or two), so it will generally handle a smallish number of rows (i.e. 100-200) each time it runs. Some rows are updates and some rows are inserts.
    If there is a delay in running the task, we don't necessarily want to process all of the outstanding rows in one go, but to take them in chunks until it catches up.
    One way to do this would be to perform multiple queries on the original data to check how many rows were outstanding, then select which ones were to be merged, then go ahead and do the merge (with both main tables, as you propose). This alternate idea (the one I was looking at here) was to bulk collect the first N rows from the table into the array (up to the defined limit) and then merge this list of rows into the destination table. The goal was to perform fewer data accesses and make the process cheapest in I/O terms. By bulk selecting up to N rows into the array, it was felt that there was less I/O on the source table and probably the same amount of I/O on the destination table.
    The very first version was to bulk select the first N rows into an array, delete any that already existed in the destination table, and then "forall" insert the array contents into the destination table. This seemed to work quite well; we wanted to compare the merge version and see how it compared in speed and I/O usage.
    rgds
    Tony
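    As for the ORA-00902 itself: TABLE(CAST(...)) only accepts collection types created at the SQL level with CREATE TYPE; a type declared inside PL/SQL as TABLE OF misbag.bags%ROWTYPE is invisible to the SQL engine. A sketch of the usual fix (hypothetical type names, attribute list abbreviated):
    CREATE TYPE t_bag_obj AS OBJECT (bag_id NUMBER /* , col2 ..., colx ... */);
    /
    CREATE TYPE t_bag_tab AS TABLE OF t_bag_obj;
    /
    -- then declare l_forall_bags as t_bag_tab in the block, and
    -- TABLE( CAST( l_forall_bags AS t_bag_tab ) ) compiles cleanly.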

  • Accessing logical columns using metadata table

    Hi,
    I have a requirement to store data coming from multiple sources in various formats in a single table, as the number of tables required is not determinable at development time. Hence I have defined 2 tables:
    1. Metadata table storing the information about logical columns in the incoming data.
    2. Raw generic table where the actual data would be stored.
    When accessing the logical data, I first need to read the metadata table to fetch the column details and then read the raw table to get the actual data in those columns. Even though this option is available, the code might not remain readable, as I need to use the actual raw-table column names in the code.
    I have thought of view-based access, but again the number of views required is not known at development time. Is there any option/feature that would provide a logical view over the raw data?
    thank you in advance.

    Hi
    Firstly, this sounds rather messy and is not something I would consider doing - but I shall give you the benefit of the doubt that all other options have been exhausted.
    A way I can think of is that you could define your metadata a bit like the Oracle data dictionary, and then use something like APEX, where you can use dynamic SQL to show a report on the data. Imagine something like this:
    DECLARE
      l_sql VARCHAR2(32000);
      l_sep VARCHAR2(2) := ''; -- empty before the first column, ', ' afterwards
    BEGIN
      l_sql := 'SELECT ';
      FOR i IN (SELECT column_name
                  FROM my_table_data
                 WHERE table_name = :TARGET_TABLE)
      LOOP
        l_sql := l_sql || l_sep || i.column_name;
        l_sep := ', ';
      END LOOP;
      l_sql := l_sql || ' FROM ' || :TARGET_TABLE;
      RETURN l_sql; -- e.g. as an APEX "function body returning SQL query"
    END;
    Obviously you can stick in predicates etc. if you need to, but that would be a starting point?
    Not very pretty though...
    Cheers
    Ben
    http://www.munkyben.wordpress.com
    Don't forget to mark replies helpful or correct ;)

  • Raw files/ photoshop

    I can't open raw files in Adobe Photoshop Elements 10. I have a Nikon D3200 and am not sure what I am doing wrong.

    dj_paige wrote:
    As far as I know, the only way for PSE to make use of RAW photos from a Nikon D3200 is to download the free Adobe DNG Converter 7.1. This will convert your RAWs to DNG, which is a format that PSE can work with, and it maintains all the benefits of RAW.
    Here is the link for Windows: http://www.adobe.com/support/downloads/detail.jsp?ftpID=5389
    I leave it up to you to Google the link for Mac.
    Paige
    Thanks for your reply. The converter works, so I can get around the problem, but it isn't so convenient.
    I still stand by my comment that Adobe should make this clear in the product information, before purchase.

  • Error converting CSV file into internal table

    Hi,
    I have to convert a large CSV file (>20,000 entries) into an internal table. I used FM GUI_UPLOAD to get a raw data table, then converted this table using FM TEXT_CONVERT_CSV_TO_SAP.
    But this does not seem to work properly: after 16,000 entries or so, the FM seems stuck, as if in an endless loop.
    Note that if I split the CSV file into several parts, the conversion runs successfully.
    Is there any memory limit with this FM?
    Thanks,
    Florian

    Florian Labrouche,
    Instead of using two function modules, you can use the 'TEXT_CONVERT_XLS_TO_SAP' function module on its own, specifying the file name in the function module itself. It does not take much time.
    Check the sample program.
    report zvenkat_upload_xl no standard page heading.
    "Declarations.
    "types
    types:
          begin of t_bank_det,
            pernr(8)  type c,
            bnksa(4)  type c,
            zlsch(1)  type c,
            bkplz(10) type c,
            bkort(25) type c,
            bankn(18) type c,
          end of t_bank_det.
    "work areas
    data:
          w_bank_det type t_bank_det.
    "internal tables
    data:
          i_bank_det type table of t_bank_det.
    " selection-screen
    selection-screen begin of block b1 with frame title text-001.
    parameters p_file type localfile.
    selection-screen end of block b1.
    "At selection-screen on value-request for p_file.
    at selection-screen on value-request for p_file.
      perform f4_help.
      "Start-of-selection.
    start-of-selection.
      perform upload_data.
      "End-of-selection.
    end-of-selection.
      perform display_data.
      "Form  f4_help
    form f4_help .
      data:
            l_file_name like  ibipparms-path  .
      call function 'F4_FILENAME'
        exporting
          program_name  = syst-cprog
          dynpro_number = syst-dynnr
          field_name    = 'P_FILE'
        importing
          file_name     = l_file_name.
      p_file = l_file_name.
    endform.                                                    " f4_help
    "Form  upload_data
    form upload_data .
      type-pools:truxs.
      data:li_tab_raw_data type  truxs_t_text_data.
      data:l_filename      like  rlgrap-filename.
      l_filename = p_file.
      call function 'TEXT_CONVERT_XLS_TO_SAP'
        exporting
          i_tab_raw_data       = li_tab_raw_data
          i_filename           = l_filename
        tables
          i_tab_converted_data = i_bank_det
        exceptions
          conversion_failed    = 1
          others               = 2.
      if sy-subrc <> 0.
        message id sy-msgid type sy-msgty number sy-msgno
                with sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
      endif.
    endform.                    " upload_data
    " Form  display_data
    form display_data .
      data: char100 type char100.
      loop at i_bank_det into w_bank_det .
        if sy-tabix = 1.
          write w_bank_det.
          write / '------------------------------------------------------------'.
        else.
          write / w_bank_det.
        endif.
      endloop.
    endform.                    " display_data
    Regards,
    Venkat.O

  • RAW to JPEG in PSE8

    How do I save a RAW file as a JPEG in PSE8? I can save it as a DNG or as a TIFF, but there's no option for saving RAWs as a JPEG, unless I'm missing something. At the moment I have to use the Canon software to convert my RAW files to JPEGs before I do any manipulation in PSE8. It'd be nice to be able to open a RAW file, tweak exposure etc. and then save it as a JPEG all in the same programme. I can do it in Corel PaintShop Pro X3, but I prefer PSE8. Is there a way of doing this?

    One thing you could do, if you want, is to make your adjustments to the RAW photo and then leave it as RAW. In other words, not convert it to JPG at all, saving disk space and avoiding the compression artifacts that happen when you convert to JPG. If so, just click on DONE. (Why do you want these converted to JPG, anyway?)
    If you want to save as JPG, you need to change the photo to 8 bits.
    Your current workflow, having the Canon software convert the RAW to JPG before you edit, seems pointless to me. You lose the benefits of RAW and you get the reduced image quality of a JPG, plus extra steps are involved. I think RAW workflows ought to leave photos as RAW until such time as you absolutely must have the photo in another format (such as for e-mailing or posting to the web).

  • ASM on RAW or OCFS2

    We have a 2-node RAC cluster using ASM, with a couple of diskgroups (DATA and FRA) on RAW devices. With our current backup methodology, we use RMAN to back up simultaneously to the FRA and /u02/backup (a cooked filesystem on node 1 for backups), from where NetBackup picks the backups up and writes them to tape. The team is a bit concerned about the learning curve involved with RAW and also about the maintenance complexities involved in DB cloning etc. (e.g. recently we were asked to clone this RAC database to a non-RAC database on a different host).
    One thought inside the team is to do away with RAW and put ASM on an OCFS2 filesystem (in which case we won't have to maintain a separate /u02/backup at all, plus there is no learning curve for managing RAW). However, we do acknowledge that by doing so we won't be able to reap the benefits of RAW long-term (when the usage of our RAC instances goes up). Also, I believe Oracle suggests ASM on RAW (I could be wrong, but that is what I generally see people talking about).
    Any suggestions/advice for or against having ASM created on OCFS2 (or even NFS etc.)?
    In case that helps, the servers are Dell PE with RHEL4 and Oracle 10.2.0.3. Our duties are well defined between the storage group, Linux group and DBAs.
    Thank you,
    - Ravi

    Dan,
    There are some things about ASM that make it easier than a filesystem, but there are others that are more difficult; there is definitely a tradeoff. For the DBA coming from a background that is light on hardware, the things ASM does best are a "black box": tasks that a sysadmin or an EMC junkie would normally do. The "simple" things a normal DBA would do (copy files, list files, check sizes) now go through another layer (whether via asmcmd, a query against the ASM instance, or RMAN). Kirk McGowan briefly talked about how the job role of the DBA has changed with the new technology:
    http://blogs.oracle.com/kmcgowan/2007/06/27#a12
    Let's look at two "simple" things I have come across so far that I would like to see improved. First is resolving archivelog gaps:
    Easiest way to fill gap sequence in standby archivelog with RAW ASM
    Yes, we all know Data Guard can do this. But this is not a thread about Data Guard (I am more than willing to talk about it in another thread or privately). With ASM on raw (from now on I will just say ASM and assume raw), you have to use RMAN. I have no problem saying that all of us should become better at RMAN (truly), but it bothers me that I cannot log in to my primary host and scp a group of logs from the archive destination to the archive destination on my standby host. Unless, of course, you put your archive destination on a cooked FS, but then we go back to the beginning of this thread.
    Another "simple" task is monitoring space usage. ASM has a gimped version of 'du' that could stand a lot of improvement. Of course, there is SQL*Plus, where you can run a nice hierarchical query against one of the V$ASM views, but 'du -sk /u0?/oradata/*' is so much simpler than either approach.
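    (For reference, the SQL*Plus route can be as short as the following, run against the ASM instance; a minimal example, not a full hierarchy query:)
    -- one row per diskgroup, sizes in megabytes
    SELECT name, total_mb, free_mb FROM v$asm_diskgroup;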
    Which leads me to ask myself whether or not we are approaching disk monitoring from a completely wrong angle. What does the 'A' stand for in ASM? grin
    There is a lot that ASM can do. And I have no doubt that, due to my lack of experience with ASM, I am simply "not getting it" in some cases.
    "While it may seem painful in the midst of it, the best way to overcome that learning curve is to diagnose problems in a very hands-on manner." - Kirk McGowan

  • When will Photoshop CC be able to read FujiFilm's new FinePix S1 RAF (RAW) files?

    Fuji's FinePix S1 was only released a couple of months ago. Don't confuse this with the 14-year-old Fuji S1 Pro.
    As of April 24, 2014, the latest RAW conversion table does not include the FinePix S1, which means that I have to use Fuji's rather awkward SilkyPix RAW file converter to make TIFF files before opening them in Photoshop or Lightroom.
    Any ideas of when Adobe will recognize this new RAW format?

    This is a user to user Forum, so you are not really addressing Adobe here, even though some Adobe employees thankfully have been dropping by.
    But this is not the Camera Raw Forum.
    Furthermore: have you contacted Fuji as to what their reasons are for not using DNG, or for not working with Adobe to make their new RAW formats usable in ACR in time?
    (Camera makers’ preferring their proprietary software for RAW conversions may be perfectly legitimate if the results are superior to Adobe’s, but is that always the case?)
