Performance with MAX clause

I have a performance issue with this query. The table is huge; it has 10M rows. Can we achieve the same result by rewriting the query below in some other way?
The columns px_date, px_type and px_active are indexed with a composite index.
SELECT MAX(GP.PX_DATE) PX_DATE,
       GP.PRICING_SECURITY_ID,
       GP.PX_ISO_CCY
  FROM price_today GP
 WHERE GP.PX_DATE <= '28-SEP-07'
   AND GP.PX_TYPE = 'PS1'
   AND GP.PX_ACTIVE = 'A'
 GROUP BY GP.PRICING_SECURITY_ID, GP.PX_ISO_CCY

Try to create another composite index with a different column order. Check the execution plan to see if the index is used; if not, try to force usage of the index with a hint.
The column order should put the most restrictive column first:
px_type, px_active, px_date
It would also be possible to create a separate index for each column, but I wouldn't like that. Note that in 11g you can collect combined statistics for all three columns. This is not possible in previous versions and might therefore require an index hint for best performance.
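As a concrete sketch of that suggestion (the index name is mine, and whether the optimizer picks it up still has to be verified in the execution plan):

CREATE INDEX px_lookup_ix
    ON price_today (px_type, px_active, px_date);

-- hinted version, in case the optimizer ignores the new index
SELECT /*+ INDEX(GP px_lookup_ix) */
       MAX(GP.PX_DATE) PX_DATE,
       GP.PRICING_SECURITY_ID,
       GP.PX_ISO_CCY
  FROM price_today GP
 WHERE GP.PX_DATE <= TO_DATE('28-SEP-07', 'DD-MON-RR')
   AND GP.PX_TYPE = 'PS1'
   AND GP.PX_ACTIVE = 'A'
 GROUP BY GP.PRICING_SECURITY_ID, GP.PX_ISO_CCY;

Two further points worth considering: appending PRICING_SECURITY_ID and PX_ISO_CCY to the index would make it a covering index, so the query could be answered from the index alone without visiting the table; and the explicit TO_DATE avoids the implicit string-to-date conversion that comparing PX_DATE to '28-SEP-07' relies on.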

Similar Messages

  • Improving performance with IN clause

We use a lot of those IN clauses, for good or bad, and I am trying to improve their performance.
I have looked at the documentation several times and can't seem to find a way to bind the values in an 'IN' clause. Is there anything else that can be done to improve IN clause performance in OCI?
    Thanks a lot

    Hi,
You can refer to the following URL on the asktom website for a detailed explanation of IN & EXISTS:
    http://asktom.oracle.com/pls/ask/f?p=4950:8:3465613697817080707::NO::F4950_P8_DISPLAYID,F4950_P8_B:953229842074,Y
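For the binding part of the question, one common technique is to bind a single collection instead of a comma-separated list of literals. A minimal sketch, assuming a numeric ID list and using the built-in SYS.ODCINUMBERLIST collection type (the table and bind variable here are made up):

-- one shared cursor no matter how many values are passed,
-- instead of one hard parse per distinct IN-list length
SELECT *
  FROM orders
 WHERE customer_id IN (SELECT column_value
                         FROM TABLE(CAST(:id_list AS SYS.ODCINUMBERLIST)));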
    HTH
    Cheers,
    Giridhar Kodakalla

  • Performance with dates in the where clause

    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
4) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
    259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND
    "FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
    9)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
1 rows processed
    Execution Plan 3:
SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
    23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
    "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
create index t_indx on test_data(fdata);
Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
1) Why isn't the index t_indx used in Execution plan 1?
To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
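A minimal sketch of such a function-based index (the index name is made up):

-- index the expression TRUNC(fdate) itself, so predicates written
-- as trunc(fdate) = ... can use an index range scan
CREATE INDEX t_trunc_indx ON test_data (TRUNC(fdate));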
2) From the execution plans, I see that queries 2 & 3 are better than query 1. I do not see any difference between execution plans 2 & 3. Which one is better?
That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE     fdate >= TRUNC (SYSDATE)
AND     fdate <  TRUNC (SYSDATE) + 1
(or replace SYSDATE with a TO_DATE expression, depending on the requirements).
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
4) Could someone explain what the filter & access predicates mean here?
Sorry, I can't.

  • Bad performance with brandnew system

    hello,
I set up a completely new Win7 64-bit system, based on a fresh OS X (Snow Leopard update) on a fresh hard disk, and I have very bad 3D performance with only 3 (!!) programs installed. I am working with Autodesk 3ds Max and AutoCAD, and both run very slowly. My last system was Win7 64-bit too, but it ran MUCH faster.
It must be a driver problem, but I don't know how to change the NVIDIA driver.
Edit: after changing the driver from DirectX 9 to OpenGL, performance is very fast.
Please help.
Alex

    Hi Bruno
In the following thread, Re: Generic Sync: Object can not be deserilized / Multiple synchronization call, you mentioned that you are using the MI 2.5 SP16 client. For your information, MI does not support the Windows Mobile 5 OS on SP16; it supports this OS only from SP18 onwards. Have you tried upgrading the client to one of the SPs I mentioned in the above thread and then checked the performance?
    Best Regards
    Sivakumar

  • Merge with where clause after matched and unmatched

Hi,
Can anybody give me an example of a MERGE statement with a WHERE clause after the matched condition and after the unmatched condition?
    MERGE INTO V1 VV1
    USING (SELECT     A.CNO XXCNO, A.SUNITS XXSU, A.DDATE XXDD, XX.SUM_UNITS SUMMED
    FROM V1 A,
    (SELECT                
    SUM(SUNITS) SUM_UNITS FROM V1 B                                   
    GROUP BY CNO) c
    WHERE
    A.DDATE=0 AND A.SUNITS <>0 AND
    A.ROWID=(SELECT MAX(ROWID) FROM V1 )) XX
    ON (1=1)
    WHEN MATCHED THEN UPDATE SET
    VV1.SUNITS=XX.SUMMED
    WHERE XX.XXDD=0
    WHEN NOT MATCHED THEN INSERT
    (VV1.CNO, VV1.SUNITS, VV1.SUNITS)
    VALUES (XX.XXCNO, XX.XXSU, XX.XXDD)
    WHERE XX.XXDD<>0
I am getting the error:
    WHERE XX.XXDD=0
    ERROR at line 13:
    ORA-00905: missing keyword
    Thanks,
    Pal

One example is here:
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10759/statements_9016.htm#sthref7014
What Oracle version do you use?
Besides, since the ON condition (1=1) is not deterministic,
I would expect an exception there like "unable to get a stable set of rows".
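For illustration, here is a minimal MERGE with a WHERE clause on both branches; the tables and columns are invented, and this syntax requires Oracle 10g or later (on 9i these WHERE clauses are not supported, which would explain the ORA-00905):

MERGE INTO accounts t
USING staged_accounts s
ON (t.cno = s.cno)
WHEN MATCHED THEN
    -- only update matched rows that pass this extra filter
    UPDATE SET t.sunits = s.sunits
    WHERE s.sunits <> 0
WHEN NOT MATCHED THEN
    -- only insert source rows that pass this one
    INSERT (t.cno, t.sunits)
    VALUES (s.cno, s.sunits)
    WHERE s.sunits > 0;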
    Rgds.

  • Physical queries generated with DISTINCT clause

    Hi All,
We have a model with three dimension tables and a fact table. All the join conditions between the dimension and fact tables are defined, and the dimensions have level keys.
Apart from this, one of the dimensions is used for applying a row-level filter.
Oracle BI is generating physical queries with a DISTINCT clause, which is hurting query execution performance. Any tips on modifying the business model to avoid the generation of the DISTINCT clause would be appreciated.
    Thanks,
    Phani

Hi,
Open the RPD, double-click the logical table source in the BMM layer, and go to the Content tab. There is a "Select Distinct Values" option, which is unchecked by default; if it is checked, uncheck it. (This only removes the DISTINCT coming from the BMM layer.)
    thanks,
    saichand.v

  • Require a query with all clauses

    Hi All,
I have been asked to write a query using all the SQL clauses (group by, where, having, order by) in a single query. I wrote the query below but did not get any output. Please point out where I am going wrong.
    select deptno, max(sal) from emp group by deptno having  deptno > ( select distinct deptno from emp where deptno >= 20)
    Thanks for your time and info.

    1007912 wrote:
    select deptno, max(sal) from emp group by deptno having  deptno > ( select distinct deptno from emp where deptno >= 20)
Logically it is wrong, as your sub-query will return more than one record for the comparison. By the way, if you are querying the same data from the same table, why do you need a sub-query in the HAVING clause? There is no need for it. And if you are not using any aggregate predicates, put the condition in the WHERE clause instead.
This is how your query and output could look:
    SQL> select
      2   deptno, max(sal)
      3  from emp
      4  where deptno >= 20
      5  group by deptno
      6  order by 1
      7  /
        DEPTNO   MAX(SAL)
            20       3000
            30       2850
    SQL>
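If the exercise really does require every clause in a single statement, a sketch on the same emp table (the HAVING threshold is arbitrary):

-- WHERE filters rows, GROUP BY forms groups, HAVING filters groups, ORDER BY sorts
select deptno, max(sal)
from emp
where deptno >= 20
group by deptno
having max(sal) > 1000
order by deptno;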

  • Oracle XE 10.2.0.1.0 – Performance with BIG full-text indexes

    I would like to use Oracle XE 10.2.0.1.0 only for the full-text searching of the files residing outside the database on the FTP server.
    Recently I have found out that size of the files to be indexed is 5GB.
As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (so with formatted documents like PDF or DOC, even less).
Let's say that the CONTEXT index size over these files will be 1.5-2GB.
The number of concurrent users will be at most 5.
    Does anybody have any experience with Oracle XE performance with the CONTEXT index this BIG?
    (Oracle XE license limitations: 1 GB RAM and 1 CPU)
    Regards.

I have used exactly the same configuration as above, but now with Oracle Database 11g R1 (11.1.0.7.0 - Production) instead of Oracle 10g XE.
The result is that AUTO_FILTER for Oracle 11g is able to parse Czech language characters from the sample PDF file without any problems.
I guess the problem with Oracle Text 10g R2 may be:
1. in embedded fonts, as mentioned in the documentation (http://download-west.oracle.com/docs/cd/B12037_01/text.101/b10730/afilsupt.htm) - I tried to embed all fonts and the whole character set, but it did not help;
2. in the character encoding of the text within the PDF documents.
I would like to add that other third-party PDF-to-text converters have similar issues with Czech characters in PDF documents: after text extraction, Czech national characters were displayed incorrectly.
    If you have any other remarks, ideas or conclusions please reply :-)
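For anyone reproducing this setup, a rough sketch of a CONTEXT index over remotely stored documents, using a URL datastore plus AUTO_FILTER as discussed above (the table, column, and preference names are made up; check the Oracle Text documentation for your version):

-- datastore preference that fetches each document by its URL (HTTP or FTP)
BEGIN
    ctx_ddl.create_preference('my_url_ds', 'URL_DATASTORE');
END;
/

CREATE TABLE remote_docs (
    doc_id  NUMBER PRIMARY KEY,
    doc_url VARCHAR2(2000)          -- e.g. an ftp:// address on the FTP server
);

CREATE INDEX remote_docs_ctx ON remote_docs (doc_url)
    INDEXTYPE IS CTXSYS.CONTEXT
    PARAMETERS ('datastore my_url_ds filter ctxsys.auto_filter');

-- full-text query against the indexed documents
SELECT doc_id
  FROM remote_docs
 WHERE CONTAINS(doc_url, 'czech', 1) > 0;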

  • Oracle 10g  – Performance with BIG CONTEXT indexes

    I would like to use Oracle XE 10.2.0.1.0 only for the full-text searching of the files residing outside the database on the FTP server.
    Recently I have found out that size of the files to be indexed is 5GB.
As I have read somewhere on this forum before, the size of the index should be 30-40% of the indexed text files (so with formatted documents like PDF or DOC, even less).
Let's say that the CONTEXT index size over these files will be 1.5-2GB.
The number of concurrent users will be at most 5.
I cannot easily test it myself yet.
Does anybody have any experience with Oracle XE or another Oracle Database edition's performance with a CONTEXT index this BIG?
Will the Oracle XE hardware resource license limitations be sufficient to handle one CONTEXT index this BIG?
    (Oracle XE license limitations: 1 GB RAM and 1 CPU)
    Regards.

    That depends on at least three things:
(1) what is the range of words that will appear in the document set (a wide range of words = smaller result sets = better performance)
    (2) how precise are the user's queries likely to be (more precise = smaller resultsets = better performance)
    (3) how many milliseconds are your users willing to wait for results
    So, unfortunately, you'll probably have to experiment a bit before you'll know...

  • ? Mail [12721] Error 1 performing query: WHERE clause too complex...

    Console keeps showing this about a zillion times in a row, a zillion times a day: "Mail [12721] Error 1 performing query: WHERE clause too complex no more than 100 terms allowed"
    I can't find any search results anywhere online about this.
Lots of stalls and freezes in Mail, Finder/OS X, and Safari, and frequent failures to maintain a broadband connection (multiple times every day).
All apps are slow and cranky, with interminable beach balls, and it keeps getting worse.
Anyone know what the heck is going on?

    Try rebuilding the mailbox to see if that helps.
    Also, how much disk space is available on your boot drive?

  • Improve Indesign performance with a huge number of links?

    Hi all,
I am working on a poster infographic with a huge number of links, specifically around 4500. I am looking to use InDesign over Illustrator for the object styles (vs. graphic styles in Illustrator) and for the interactive capacity.
The issue I am having is InDesign's performance with this many links. My computer is not maxed out on resources when InDesign is going full power, but InDesign is still very slow.
    So far, here are the things I have tried:
Setting display performance to Fast
Switching from linked AI files to SVGs
Turning off preflight
Turning off live-draw
Turning off save preview
Please let me know if you have any suggestions on how to speed up InDesign! See my system specs below.
    Lenovo w520
    8GB DDR3 @1333mhz
    nVidia 2000M, 2GB GDDR
    Intel Core i7 2760QM @ 2.4GHz
    240GB Samsung 840 SSD
    Adobe CS6
    Windows 8
    The only other thing I can think to try is to break up the poster into multiple pages/docs and then combine it later, but this is not ideal. Thank you all for your time.
    Cheers,
    Dylan Halpern

I am not a systems expert, but I wonder whether hiding the links and keeping InDesign from accessing them might help. Truly just guessing.
    Package the file so all the graphics are in a single folder. Then set the File Handling Preferences so InDesign doesn't yell at you when it can't find the links.
    Then quit InDesign and move the folder to a new place. Then reopen InDesign. The preview of the graphics will suck, but it might help. And one more thing, close the Links panel.

  • Gaming Performance with New MacBook

Hey, I am going to buy the new MacBook soon, I think, and I had a few questions about gaming performance with the MacBook, in particular running America's Army: Operations with Boot Camp and Windows XP.
I currently have last year's model 20" iMac with a 2.4GHz processor and 4GB RAM. I run the game smoothly at full graphics and get over 60fps with all settings turned up. I think 60 is the max you can get, because it won't move above it, yet it never really goes under it either.
Does anyone know the performance abilities of the new MacBooks and the new video cards? Will I need the 2.4GHz MacBook, or will the 2.0GHz with 4GB of upgraded RAM be sufficient? I don't need to run the game at full details, just enough to get 20-30 fps on the lowest settings. Do you think the new MacBooks can run it? I know the MBP can do it for sure; I'm just curious whether I can opt for a cheaper solution of only $1200.

I don't know anything about fps, but I can tell you I bought the 2.0 processor aluminum MacBook and I play the following games in Boot Camp:
    -gears of war
    -fifa 09
    -pes 09
    -scarface
    -need for speed undercover
    -call of duty 5
All of those run at full graphics, except CoD 5, which runs on medium/low.

  • Report  performance with Hierarchies

    Hi
How can we improve query performance with hierarchies? We have to do a lot of navigation in the query, and the volume of data is very big.
    Thanks
    P G

Hi,
Check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° the OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than queries that receive their result set from database access
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° when you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
This (besides giving the possibility to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate with the request ID every time you execute a query. Using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs and, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° by using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
Thank you,
    dst

  • What is new with MAX 2.0 and is it compatible with Session Manager?

We added non-IVI instrument information, in basically the same structure as for IVI instruments, into the ivi.ini file to keep all instrument information in the same place. Using MAX Version 1.1 caused no problems whatsoever and the system worked fine. With the advent of MAX 2.0 you seem to use ivi.ini as well as config.mxs to store instrument information. What we have found now is that, given a working ivi.ini file from MAX 1.1, we end up with 2 or 3 copies of all the devices in the IVI Instruments->Devices section! When the duplicate entries are deleted and the application exited, the ivi.ini file is updated minus the [Hardware->] sections which contain the resource descriptors that our applications look for. As an added complication, under MAX 2.1 (from an evaluation of the Switch Executive) it behaves the same, except that it almost always crashes with one of the following errors: 'OLEChannelWnd Fatal Error', or 'Error#26 window.cpp line 10028 Labview Version 6.0.2'. Once opened and closed, MAX 2.1 will not open again! (Note: we do not have LabVIEW on the system.) What is the relationship between config.mxs and ivi.ini now? Also, your Session Manager application (for use with TestStand) extracts information from ivi.ini and may expect entries to be manually entered into ivi.ini (e.g. NISessionManager_Reset = True). Is the TestStand Session Manager compatible with MAX 2.0?

    Brian,
    The primary difference between MAX 1.1 and 2.x is that there is a new internal architecture. MAX 2.x synchronizes data between the config.mxs and the ivi.ini. The reason you're having trouble is that user-editing of the ivi.ini file is not supported with MAX 2.x.
    Some better solutions to accomplish what you want:
    1. Do as Mark Ireton suggested in his answer
2. Use the IVI Run-Time Configuration functions. They will allow you to dynamically configure your Logical Names, Virtual Instruments, Instrument Drivers, and Devices. You can then use your own format for storing and retrieving that information, and use the relevant pieces for each execution. You can find information on these functions in the IVI CVI Help file located in the Start >> National Instruments >> IVI Driver Toolset folder. Go to the chapter on Run-time Initialization Configuration.
    I strongly suggest #2, because those functions will continue to be supported in the future, while other mechanisms may not be.
    --Bankim
    Bankim Tejani
    National Instruments

I have not been able to use iTunes for several months.  Every time I open iTunes, it freezes my computer such that there is about a minute between each action.  I am running iTunes 11 on Mac OS 10.6.8 on a computer with maxed-out memory.

I have not been able to use iTunes for several months.  Every time I open iTunes, it freezes my computer such that there is about a minute between each action.  I am running iTunes 11 on Mac OS 10.6.8 on a computer with maxed-out memory.  Help!  I can't access my iTunes content.

