Understanding Global Optimization VI

Hello,
I have a task that requires using global optimization, but I am not really familiar with it. I've tried looking at the example VIs, but I have a hard time understanding where my data would go.
My actual task is to measure mechanical impedance (as a spectrum of complex numbers) and fit a 4-parameter model to it, with the global optimization finding the best parameter values. (My predecessors described this as the approach that works best, which is why I'm trying to implement it this way.) The outcome of this process should be a minimized difference between the measured impedance and the model fitted on it.
Our research group uses a program with a working implementation of this, but we want to implement the model fitting in LabVIEW.
The impedance data are complex numbers, so they have real and imaginary parts, and some model parameters are determined by both the real and the imaginary part of the data.
As a first trial I tried a "simple" Lev-Mar fit on my data by modifying the VIs from this topic (using typecasting to feed the complex data into the Lev-Mar VI), but it only found values similar to my desired fit (done with the existing program) if I set the initial estimate very close to the values I get in my program. That's why I wanted to move on to global optimization: to get LabVIEW to find these estimates within some bounds.
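In Python terms (just a sketch of the idea, not the LabVIEW code; scipy's least_squares stands in for the Lev-Mar VI, and "residuals"/"model" are my own placeholder names for the 4-parameter impedance model described below), the real/imaginary splitting I mean looks roughly like this:

import numpy as np
from scipy.optimize import least_squares

def residuals(params, omega, z_meas, model):
    # stack real and imaginary parts so a real-valued Lev-Mar routine
    # can see the complex data (my analogue of the typecasting trick)
    diff = model(omega, *params) - z_meas
    return np.concatenate([diff.real, diff.imag])

# usage with hypothetical measured data omega, z_meas and a starting guess x0:
# fit = least_squares(residuals, x0, args=(omega, z_meas, model), method="lm")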
And that's where I am stuck:
I have a complex impedance (so real and imaginary parts for each frequency point) and an equation that is supposed to give the model output. If I take, for example, the "Two Circles Optimization" example VI, where should I wire my impedance spectrum for the VI to analyze? The Lev-Mar VI had clear inputs for my data (as X and Y), so I knew where to wire it, but with the global optimization I do not know where to wire it.
I have the impression that I should use a VI for the objective function that handles the complex input as well.
The objective function for the optimization should be the mean of [Z(omega) - Zfit(omega)]/Z(omega) over all frequency points, with Z being the measured and Zfit the fitted impedance computed from the parameter estimates: Zfit = A + j*omega*B + (C - j*D)/(omega^[(2/pi)*arctan(C/D)]) (j is the imaginary unit).
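To make that concrete, here is the model and the objective in a short Python sketch (my own notation and function names, not LabVIEW; I take the magnitude of the complex relative error so the optimizer gets a single real number back, and I assume the exponent closes after arctan(C/D)):

import numpy as np

def z_fit(omega, A, B, C, D):
    # Zfit = A + j*omega*B + (C - j*D) / omega**((2/pi)*arctan(C/D))
    expo = (2.0 / np.pi) * np.arctan(C / D)
    return A + 1j * omega * B + (C - 1j * D) / omega ** expo

def cost(params, omega, z_meas):
    # mean magnitude of the relative error between measurement and model
    A, B, C, D = params
    rel = (z_meas - z_fit(omega, A, B, C, D)) / z_meas
    return float(np.mean(np.abs(rel)))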
So where should I put my impedance data in the example VI so that it is used by the objective function?
Thank you, I hope my explanation is more or less clear.
Engage! using LV2012
Solved!
Go to Solution.

Hello,
I had some time to continue with this part of the project. I finally figured out how I should use these VIs to take my data, calculate the objective function and return the parameters in the end. However, I am not really familiar with optimization and have a hard time tweaking the VIs to work.
I have created a simple VI to include a few kinds of optimization, plus the values I receive from an old existing program I want to replace. (It is said to include a kind of global optimization with a random search method, roughly the idea sketched below, but I am not sure of this and cannot ask the original programmer.)
However, there are huge differences in the parameters and in how well the curves adhere to the original data. What should I do to get LabVIEW to return values more or less similar to those from my previous program (MyProgram)? (Something like less than 5 percent difference.)
And I have a strange problem as well: for some initial values I receive an error that says "error in Armijo stepsize reduction", and I do not know which values make it appear and which make it disappear. So I randomly changed the initial values until the errors finally went away.
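For context, my current understanding (not from the NI documentation, and the function and parameter names below are my own) is that the Armijo step-size reduction is a backtracking line search roughly like this sketch, and the error seems to mean that no step length satisfying the sufficient-decrease test could be found, e.g. when the objective blows up or the search direction is poor:

import numpy as np

def armijo_backtrack(f, x, d, grad, alpha0=1.0, beta=0.5, c=1e-4, max_halvings=30):
    # shrink the step along direction d until f decreases "enough",
    # or give up after too many halvings
    fx = f(x)
    slope = c * np.dot(grad, d)   # should be negative for a descent direction
    alpha = alpha0
    for _ in range(max_halvings):
        if f(x + alpha * d) <= fx + alpha * slope:
            return alpha          # acceptable step found
        alpha *= beta
    raise RuntimeError("Armijo stepsize reduction failed")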
Can you have a look at my code and tell me what I should improve? (I know it is a complete mess; it is just for demonstration purposes, to show the differences between the optimization VIs.)
(My measurement is an impedance measurement and I want to get some parameters in the end: A, B, C and D).
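For reference, this is roughly what I imagine the random-search part of MyProgram does (a minimal sketch assuming plain uniform sampling inside box bounds on A, B, C and D; the function names are my own, and the best point could then be refined with Lev-Mar):

import numpy as np

def random_search(cost, lower, upper, n_iter=20000, seed=0):
    # sample parameter vectors uniformly inside the bounds, keep the best one
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    best_x, best_f = None, np.inf
    for _ in range(n_iter):
        x = rng.uniform(lower, upper)
        f = cost(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# toy usage: minimum of a shifted paraboloid inside [-10, 10]^4
best_x, best_f = random_search(lambda p: np.sum((p - 1.5) ** 2),
                               lower=[-10] * 4, upper=[10] * 4)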
Engage! using LV2012
Attachments:
OptTest.llb 142 KB

Similar Messages

  • Help understanding Global scope

    I have a formula "Access plus modified"-
    Global recordcount;
    whilereadingrecords;
    recordcount := recordcount+1;
    if {SRMFILE.ACCESSTIME} < {?Last accessed } AND
    {SRMFILE.WRITETIME} < {?Last Modified}
    AND
    {SRMFILESYSTEM.FILESYSTEMDEVICE} IN filesystemdevice
    THEN
    (parentdir := {SRMFILE.PARENTDIR};
    accesstime := {SRMFILE.ACCESSTIME};
    writetime := {SRMFILE.WRITETIME};
    currentfilename := {SRMFILE.FILENAME};
    fileactualsize := {SRMFILE.ACTUALSIZE}/1024/1024;
    thisfilesystemdevice := {SRMFILESYSTEM.FILESYSTEMDEVICE};
    In a following formula I want to display the number of records (recordcount):
    Evaluateafter ({@Access plus Modified});
    recordcount;
    But in the formula editor I get an error unless I declare recordcount again as
    Global numbervar recordcount;
    and then the count is zero.
    I thought Global forces a variable to be visible throughout the report?

    Carl, the fields on my report are the ones calculated in the IF logic :-
    parentdir := {SRMFILE.PARENTDIR};
    accesstime := {SRMFILE.ACCESSTIME};
    writetime := {SRMFILE.WRITETIME};
    currentfilename := {SRMFILE.FILENAME};
    fileactualsize := {SRMFILE.ACTUALSIZE}/1024/1024;
    thisfilesystemdevice := {SRMFILESYSTEM.FILESYSTEMDEVICE};
    recordcount := recordcount+1;
    Each variable in the IF logic is further defined by another formula i.e. :-
    EvaluateAfter ({@Access plus Modified});
    StringVar currentfilename;
    currentfilename;
    The formulas for each variable are then defined in the details section.
    This is a small sample of the report :-
    FilesystemDevice     Actual size(GB)     Access      ParentDir     Recordcount
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:36:30 PM     /db2/pdid322/sqllib/     1.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/19/2006  8:08:08 PM     /db2/pdid322/piris00q/dbbackup/     2.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:38:48 PM     /db2/pdid322/sqllib/hmonCache/pdid322/     3.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:38:48 PM     /db2/pdid322/sqllib/hmonCache/pdid322/     3.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:38:48 PM     /db2/pdid322/sqllib/hmonCache/pdid322/     3.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:38:48 PM     /db2/pdid322/sqllib/hmonCache/pdid322/     3.00
    /dev/vx/dsk/dvgy322/db2     0.00     7/15/2006  9:38:48 PM     /db2/pdid322/sqllib/hmonCache/pdid322/     3.00
    You will see that "recordcount" stays at 1 for many of the "duplicate" entries. Then moves to 2 for a single record, then 3 for many.
    Thanks John

  • Understanding Global GSM unlock on 4G phones

    Hello all,
    Planning to purchase a Verizon LG G2.
    I'll be purchasing it at full retail price without a contract which is $499.
    I want to ask: if I do this, take the phone to my country outside the USA and insert my local GSM provider's SIM card, will it work or not?
    I have been told 4G LTE Verizon phones with sim card slots & gsm radio are globally GSM unlocked.
    I have asked Verizon support 4 times now via phone, chat and Twitter... I've had conflicting answers.
    Once I was told Yes this will work.
    Then I was told I need to activate the phone pay an activation fee of $35 go on a monthly contract only then will I be able to use the phone even if I'm outside USA & using another sim card!
    I am confused now and want to ask you: will this LG G2 work fully outside the USA if I purchase it at retail price from a Verizon store without activating it or anything, take it back to my country, insert my local SIM card and start using it?

        Sounds like a fantastic upcoming trip, pirata9946. If the device is not going to be used on the Verizon Wireless network during this time period, I would highly suggest placing the device into suspension. We currently offer unbilled suspension, with which you may suspend the line for 180 days (90 days at a time) in a 12-month period. You can find more information here: http://vz.to/1ddOJkC
    YosefT_VZW
    Follow us on Twitter @VZWSupport

  • SNP Optimizer Issue

    Hi Experts,
    We are working on an SNP Optimization scenario with transportation, procurement, storage and penalty costs. Our business is in the trading industry (we procure and sell).
    We maintained an Optimization profile with the linear and primal simplex algorithms. We took a sample scenario where our product has four suppliers with different transportation costs maintained in the transportation lanes.
    In SDP94, the Optimizer behaves differently.
    Case 1: When we run directly at the destination location (product + destination) level, optimization runs smoothly without picking any transportation lanes or costs. Subsequently, no purchase requisitions are created.
    Case 2: When we select all locations (product + source and destination) and run optimization, the following errors appear, and we are not able to identify their origin.
    Error 1: Cost function 051MhWG07j6MwgLZXQcXSm not found
    Error 2: Conversion from STD to EA for product WDE-MATL1A not maintained
    Error 3: Error occurred when reading data
    We checked CCR; no errors were identified. Also, there is no UOM called STD. We couldn't understand why the Optimizer is proposing an STD to EA conversion error.
    Can you please throw some light.
    Thanks in advance

    Hi Ugameswara Rao,
    When you run the optimizer, you need to run it as a whole, including the entire network. So the first observation is standard.
    Regarding the second issue, it is a master data inconsistency, so please identify the concerned location product and maintain the cost accordingly (check the log to find the concerned location product).
    Regarding the error "Error occurred when reading data", please raise a separate thread, as the reasons for this are multiple. But first try running //om17 (model consistency check) and correct the data model set. The reason for this could also be a software error, so if the above doesn't help, please raise a separate thread with all the details: the step in which this error occurs and the message log details.
    Thanks and Regards
    Suresh

  • Rules for Deployment Optimization

    Hi ,
    While defining a deployment profile, we have to define Fair Share and PD rules. I just wanted to know their significance in Deployment Optimization. As far as I understand, Deployment Optimization is done only on the basis of costs.
    Reply needed urgently.
    Thanks & regards
    Kunal

    user1073781 wrote:
    What about the following rules for good SQL usage and query optimization imposed by the DBA of my company?
    My, where to begin? I assume this is for the World's Most Hopeless DBA competition?
    >
    1) Don't use referential integrity but implement it applicationally. This for easy tables administration
    Because it is real easy to deal with logically corrupted data I guess.
    >
    2) Don't specify schema name in the queries, but only use table name. The table name must be unique on the entire Oracle instance.
    It doesn't matter if it is unique; you won't find it without the schema name, unless public synonyms are used, and those should be avoided.
    >
    3) Don't use select *.
    Sometimes valid, sometimes not.
    >
    4) Don't use ROWNUM in the where condition
    This excludes being able to execute top-n or pagination queries, unless the intent is to return all rows, possibly billions, and throw away all except ten, for example.
    >
    5) Don't create too much indexes because they make worse performance of the insert,update and delete operations
    Define too much.
    >
    In my humble opinion 1) is completely wrong: why use a relational dbms without relationships between tables?
    Agreed.
    >
    What do you think?
    Most of the others make no sense either.

  • Java.lang.NullPointerException in adobe.abc.GlobalOptimizer

    I'm accessing the GlobalOptimizer directly and passing an ABC dump to it. It works just fine if I pass, let's say, builtin.abc or something from Tamarin, but if I pass in an ABC dump from a SWF compiled in Flash CS5 I get an error. Any ideas? Mainly looking for input from an Adobe employee, but if anyone has any knowledge of the situation I'd greatly appreciate it. Thanks!

    To be honest I don't really know a ton about it. My interest in it was sparked when I found a briefing paper on the LLVM.org website about how Adobe planned to use GlobalOptimizer for the Alchemy compiler. From what I understand, the global optimizer output an intermediary representation of the ActionScript bytecode passed to it for optimization, and Adobe planned on simply converting that output into code that the LLVM compiler could understand and then performing its optimization techniques on that code in static compilation. This is a hack solution, but I believe it was the initial solution used to create the Alchemy toolkit.

    Since then Adobe seems to have perfected this process and is using it for the Packager for iPhone as well as (although I'm not 100% sure) Android. If you download and decompile PFI.jar (Packager for iPhone) you'll see there are a ton of new classes added to the ActionScript compiler, all related to LLVM code generation. Note that in the latest Packager for iPhone, these classes have been merged into the Adobe AIR jar file (I believe it's adt.jar). So what it appears Adobe has done is actually written Java code to interface with LLVM for converting the ABC output into LLVM IR. LLVM provides interfaces to do this, and this is the ideal solution, since it would make updating the whole toolchain to work with newer releases of LLVM much, much easier.

    This is also why I assume that Adobe has stopped updating the Alchemy project; I think perhaps the only reason it's still around is to satisfy the licenses of LLVM and the other various open-source Apple code they've integrated with. Basically I don't think we're ever going to see an update to the Alchemy project again, since this technology is now at the core of the sales for Flash (that is, the ability to compile mobile native applications). If any of this were made open source (the new LLVM class additions to the ActionScript compiler), or the existing Alchemy project were updated, it would make it far too easy for third-party tools to be created for publishing Flash native executables to a wide variety of devices, and thus be directly counteractive to the core of Adobe's business plan for the Flash platform.

    So yeah, after finding all of this out I pretty much abandoned my research on the GlobalOptimizer and the ActionScript compiler. I can tell you this about the GlobalOptimizer: it needs to be updated, along with the classes it uses, to understand new classes available in the latest version of the ActionScript 3 language. For example, if you were to grab the open source of the ActionScript compiler and run GlobalOptimizer on an ABC file that targets Flash 10+ and uses classes like the Vector class, it would crash (or at least it did when I tried it). That is because the open-source versions of this stuff (asc, etc.) are purposely left trailing behind what is current in the Flash world, and legally so, since it's all licensed mostly under the Mozilla Public License. I did start updating the source to do this, but honestly there are so many more core types being added, and ones that have already been added, that it's not worth the time. Anyway, that's all the info I've got; hope it helps.

  • ORACLE ENTERPRISE EDITION VERSION 별 주요 특징

    Product: ORACLE SERVER
    Date written: 2004-08-13
    The major features of each Oracle Server version are as follows.
    Oracle 7.1
    ===========
    - Parallel SQL operations provided:
         . Parallel Scan, Parallel Join, Parallel Sub-query, etc.
         . Parallel loading
         . Parallel Index Creation
    - Synchronous Distributed Database
         . Global Optimization for Distributed Query/Update
    - Asynchronous
         . Deferred Remote Procedure Call
         . Updatable Snapshot
    - Advanced Replication
         . Asymmetric Replication
    Oracle 7.2
    ===========
    - No logging option (CREATE INDEX)
         . e.g., norecovery option when creating an index
    - Parallel Insert
    - Star Query Optimization
         . Used to improve performance in Data Warehousing
    - Union All view
    - Resizable data files
    - Cursor Variable Data-type
    Oracle 7.3
    ===========
    - Partition view provided
         . Parallel UNION,UNION ALL
    - Hash Join
         . Effective for join processing, using a hash table
    - Asynchronous Read-ahead
    - Parallel Load Balancing in OPS
    - Dynamic system configuration
         . Some Oracle initialization parameters can be changed without shutting down the DB
    - Updatable view
         . Updates are possible on views that preserve the primary key
    - Oracle Enterprise manager
         . A tool that enables monitoring of the Oracle server from a client
    Oracle 8
    =========
    - Feature improvements and additions for Data Warehousing
         . Table and index partitioning
         . 64-bit file system support (an option in Oracle7)
    - Feature improvements and additions for OLTP
         . Support for large numbers of users via connection pooling
         . Integration of Oracle Parallel Server (OPS) and the Distributed Lock Manager
         . Advanced Queuing mechanism support
         . Optimized System Change Number generation (10-15% OPS performance improvement)
         . Global Fixed Views for managing instances on multiple nodes in OPS
         . Transparent Application Fail Over in OPS
         . Improved TP monitor support
         - Dynamic Registration
         - Loosely-Coupled Transaction
         - No need to session cache
         - XA library for OPS provided on all platforms
         . Multi-level, Incremental Backup in Recovery manager
    - Replication
         . Parallel Propagation
         . Replication Manager Wizard
         . Primary key snapshot
    - Object-Relational Database
         . Object type support for extending the server in an application-centric way
         . External procedures usable as object methods
         . Fast client-side object retrieval support
         . Multimedia data support (images, video, text, etc.)
         . Java support via JDBC/JSQL
         . Object views provided for object-typed data

    Hi;
    1. Patch can be download only from metalink
    2. All related patch number can be found:
    NOTE:430449.1 - How to find DB Patches for the Microsoft Windows platforms My Oracle Support
    11.2.0.x Oracle Database and Networking Patches for Microsoft Platforms [ID 1114533.1]
    PS: Please don't forget to change the thread status to answered, if possible, when you believe your thread has been answered; otherwise other forum users lose time while searching for open questions that have not been answered. Thanks for understanding.
    Regard
    Helios

  • Index issue with or and between when we set one partition index to unusable

    I need to understand why the optimizer is unable to use the index in the case of “OR” when we set one index partition to unusable, while the same query with BETWEEN uses the index.
    The “OR” condition fetches less data than “BETWEEN”, yet the Oracle optimizer is still unable to use the index in the “OR” case.
    1. Created a local index on the partitioned table
    2. Index partition t_dec_2009 set to unusable
    -- Partitioned local index behavior with “OR” and with “BETWEEN”
    SQL> CREATE TABLE t (
      2    id NUMBER NOT NULL,
      3    d DATE NOT NULL,
      4    n NUMBER NOT NULL,
      5    pad VARCHAR2(4000) NOT NULL
      6  )
      7  PARTITION BY RANGE (d) (
      8    PARTITION t_jan_2009 VALUES LESS THAN (to_date('2009-02-01','yyyy-mm-dd')),
      9    PARTITION t_feb_2009 VALUES LESS THAN (to_date('2009-03-01','yyyy-mm-dd')),
    10    PARTITION t_mar_2009 VALUES LESS THAN (to_date('2009-04-01','yyyy-mm-dd')),
    11    PARTITION t_apr_2009 VALUES LESS THAN (to_date('2009-05-01','yyyy-mm-dd')),
    12    PARTITION t_may_2009 VALUES LESS THAN (to_date('2009-06-01','yyyy-mm-dd')),
    13    PARTITION t_jun_2009 VALUES LESS THAN (to_date('2009-07-01','yyyy-mm-dd')),
    14    PARTITION t_jul_2009 VALUES LESS THAN (to_date('2009-08-01','yyyy-mm-dd')),
    15    PARTITION t_aug_2009 VALUES LESS THAN (to_date('2009-09-01','yyyy-mm-dd')),
    16    PARTITION t_sep_2009 VALUES LESS THAN (to_date('2009-10-01','yyyy-mm-dd')),
    17    PARTITION t_oct_2009 VALUES LESS THAN (to_date('2009-11-01','yyyy-mm-dd')),
    18    PARTITION t_nov_2009 VALUES LESS THAN (to_date('2009-12-01','yyyy-mm-dd')),
    19    PARTITION t_dec_2009 VALUES LESS THAN (to_date('2010-01-01','yyyy-mm-dd'))
    20  );
    SQL> INSERT INTO t
      2  SELECT rownum, to_date('2009-01-01','yyyy-mm-dd')+rownum/274, mod(rownum,11), rpad('*',100,'*')
      3  FROM dual
      4  CONNECT BY level <= 100000;
    SQL> CREATE INDEX i ON t (d) LOCAL;
    SQL> execute dbms_stats.gather_table_stats(user,'T')
    -- Mark partition t_dec_2009 to unusable:
    SQL> ALTER INDEX i MODIFY PARTITION t_dec_2009 UNUSABLE;
    --- Let’s check whether the usable index partition can be used to apply a restriction: BETWEEN
    SQL> SELECT count(d)
        FROM t
        WHERE d BETWEEN to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
                    AND to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss');
    SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));
    | Id  | Operation               | Name | Pstart| Pstop |
    |   0 | SELECT STATEMENT        |      |       |       |
    |   1 |  SORT AGGREGATE         |      |       |       |
    |   2 |   PARTITION RANGE SINGLE|      |    12 |    12 |
    |   3 |    INDEX RANGE SCAN     | I    |    12 |    12 |
    --- Let’s check whether the usable index partition can be used to apply a restriction: OR
    SQL> SELECT count(d)
        FROM t
        WHERE
        (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
        or
        (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'))
    SQL> SELECT * FROM table(dbms_xplan.display_cursor(format=>'basic +partition'));
    | Id  | Operation           | Name | Pstart| Pstop |
    |   0 | SELECT STATEMENT    |      |       |       |
    |   1 |  SORT AGGREGATE     |      |       |       |
    |   2 |   PARTITION RANGE OR|      |KEY(OR)|KEY(OR)|
    |   3 |    TABLE ACCESS FULL| T    |KEY(OR)|KEY(OR)|
    ----------------------------------------------------
    The “OR” condition fetches less data than “BETWEEN”, yet the Oracle optimizer is still unable to use the index in the “OR” case.
    Regards,
    Sachin B.

    Hi,
    What is your database version????
    I ran the same test and optimizer was able to pick the index for both the queries.
    SQL> select * from v$version;
    BANNER
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
    PL/SQL Release 10.2.0.4.0 - Production
    CORE    10.2.0.4.0      Production
    TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    SQL>
    SQL> set autotrace traceonly exp
    SQL>
    SQL>
    SQL>  SELECT count(d)
      2  FROM t
      3  WHERE d BETWEEN to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss')
      4              AND to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss');
    Execution Plan
    Plan hash value: 2381380216
    | Id  | Operation                 | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT          |      |     1 |     8 |    25   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE           |      |     1 |     8 |            |          |       |       |
    |   2 |   PARTITION RANGE ITERATOR|      |  8520 | 68160 |    25   (0)| 00:00:01 |     1 |     2 |
    |*  3 |    INDEX RANGE SCAN       | I    |  8520 | 68160 |    25   (0)| 00:00:01 |     1 |     2 |
    Predicate Information (identified by operation id):
       3 - access("D">=TO_DATE(' 2009-01-01 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "D"<=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    SQL>  SELECT count(d)
      2  FROM t
      3  WHERE
      4  (
      5  (d >= to_date('2009-01-01 23:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-01-01 23:59:59','yyyy-mm-dd hh24:mi:ss'))
      6  or
      7  (d >= to_date('2009-02-02 01:00:00','yyyy-mm-dd hh24:mi:ss') and d <= to_date('2009-02-02 02:00:00','yyyy-mm-dd hh24:mi:ss'))
      8  );
    Execution Plan
    Plan hash value: 3795917108
    | Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
    |   0 | SELECT STATEMENT         |      |     1 |     8 |     4   (0)| 00:00:01 |       |       |
    |   1 |  SORT AGGREGATE          |      |     1 |     8 |            |          |       |       |
    |   2 |   CONCATENATION          |      |       |       |            |          |       |       |
    |   3 |    PARTITION RANGE SINGLE|      |    13 |   104 |     2   (0)| 00:00:01 |     2 |     2 |
    |*  4 |     INDEX RANGE SCAN     | I    |    13 |   104 |     2   (0)| 00:00:01 |     2 |     2 |
    |   5 |    PARTITION RANGE SINGLE|      |    13 |   104 |     2   (0)| 00:00:01 |     1 |     1 |
    |*  6 |     INDEX RANGE SCAN     | I    |    13 |   104 |     2   (0)| 00:00:01 |     1 |     1 |
    Predicate Information (identified by operation id):
       4 - access("D">=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "D"<=TO_DATE(' 2009-02-02 02:00:00', 'syyyy-mm-dd hh24:mi:ss'))
       6 - access("D">=TO_DATE(' 2009-01-01 23:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
                  "D"<=TO_DATE(' 2009-01-01 23:59:59', 'syyyy-mm-dd hh24:mi:ss'))
           filter(LNNVL("D"<=TO_DATE(' 2009-02-02 02:00:00', 'syyyy-mm-dd hh24:mi:ss')) OR
                  LNNVL("D">=TO_DATE(' 2009-02-02 01:00:00', 'syyyy-mm-dd hh24:mi:ss')))
    SQL> set autotrace off
    SQL>
    Asif Momen
    http://momendba.blogspot.com

  • JTA and XA, Mqueue

    Trying to get XA to work with Oracle and
    IBM MQSeries.
    I'm not having much luck. Lots of problems
    and a lack of info.
    We want MQSeries to be the TM and Oracle to
    be the resource. There seem to be no docs
    on how to make Oracle an XA resource and
    understand global transactions.
    Any help would be great.
    thanks
    mark =.=

  • Creating a new database vs. Using the existing database

    Hi,
    I have to build a new desktop application over an existing oracle database -
    Current database is around 300 gb
    Has 1500+ tables
    Tables are in denormalized form and have ungrouped table columns
    High level of nesting is done in the triggers.
    Now before I take decision to use existing database I want to know
    1. What factors should I consider when choosing between the existing DB and a new DB?
    Thanks in advance.

    Some of the things I would consider before including the two desktop applications together:
    Would you mind impacting one or the other application (downtime) if you needed to bounce the database for any reason, or if there were issues with the database? Keeping them separate would prevent having to bring down both applications if the database went down for any reason (planned or otherwise).
    Would the type of activity in both databases be so similar that all global optimizer and other parameter settings would be perfect for both cases?
    Would the SLA be the same for both databases? Would your customers agree to impacts of one application data if you had to roll back the database for the other application?
    By having the two applications in the same database, you are also stuck with having to wait for both application vendors to certify their product with the next patch level or version for upgrades. Keeping them separate eliminates this problem.
    Is there any compelling reason to keep them together?
    Personally, I would keep them separate for all of the above reasons, and many more which I did not bother to write.

  • Where is the best place to put custom functions?

    Hi,
    I have a composition which has a number of symbols. I have to call some custom methods externally and was wondering where is the best place to put the custom methods?
    I have seen posts that I should put the code in the CompositionReady event of the stage but I would like to put it a bit closer to the symbol.
    Is this the best place?
    Sham.

    Here is a case:
    It's a good idea if you understand global and local variables well.
    About the complete event, you are right.

  • Pass IR Filters to Interactive Report

    I am calling my Interactive Reports from a menu, with IR filters ('SCHOOL' and 'YEAR') being passed to the report. I'm wondering whether the interactive report queries all the data first and then applies the IR filters (which would be a performance problem). I ran the report in debug mode, but I'm not really sure how it evaluates the SQL. Does anybody know how passing IR filters to a report works in the execution of the SQL?
    select
    null as apxws_row_pk,
    "ATTENDANCE_CODE",
    "ATTENDANCE_CODE_DESC",
    "PK_ID",
    count(*) over () as apxws_row_cnt
    from (
    select * from (
    select A.PK_ID as "PK_ID",
    A.YEAR as "FK_YEAR",
    A.FK_SCHOOL as "FK_SCHOOL",
    A.ATTENDANCE_CODE as "ATTENDANCE_CODE",
    A.ATTENDANCE_CODE_DESC as "ATTENDANCE_CODE_DESC",
    C.SCHOOL_YEAR as "YEAR"
    from "#OWNER#"."SCHOOL_YEAR" C,
    "#OWNER#"."SCH_BASE" B,
    "#OWNER#"."ATTENDANCE_CODE" A
    where B.PK_ID = A.FK_SCHOOL
    and C.PK_ID = A.YEAR
    ) r
    where ("SCHOOL" = :APXWS_EXPR_1
    and "YEAR" = to_number(:APXWS_EXPR_2))
    ) r where rownum <= to_number(:APXWS_MAX_ROW_CNT)
    order by "ATTENDANCE_CODE"

    Stew
    The information was not specific to rewrite; I was merely trying to show how tracing can be enabled to determine what the optimizer is doing (which was appropriate to the rewrite question). It will also show you the query actually being performed and the execution plan.
    This information would allow you to see the query being passed to Oracle by APEX and what predicates are in the query passed, which would go some way towards answering the original question.
    To your question 'an Oracle SQL query is an Oracle SQL query' - I'm not quite sure what you mean. Two differently constructed queries over the same tables, returning the same results, can differ wildly in performance. Understanding the optimizer and the decisions it makes will change the way in which you write queries.
    I've generally found out that the best way to determine what happens behind the scenes is to have a look. Getting information from the experts is fantastic, but working it out yourself or even just testing to confirm the answers you are given helps understanding even more.
    Ben
    Looking at the rownum clause, it will speed up the query if the ORDER BY clause is removed. If you are ordering the results, the entire result set must be returned first in order to get the first x rows. Without ordering, this is not required.
    Edited by: Munky on Jan 16, 2009 3:51 PM

  • [IDES] ECC6 - swcat / swdc + HR

    Hi,
    I am trying to download IDES for the latest version of ECC, and hope this is the forum to ask this kind of question.
    I am an SAP Partner but not a Global Partner.
    (It seems to be relevant: as per my understanding, a Global Partner can get CDs/DVDs through the Software Catalog, while a Partner can simply download items from the Distribution Center.)
    I checked in http://service.sap.com/swcat (Software Catalog), where ECC 6.0 is available for shipment as CD/DVD.
    I also checked in http://service.sap.com/swdc (Distribution Center), but I couldn't find the ECC 6.0 version as a downloadable ZIP file.
    Questions:
    1) Is the ECC 6.0 available somewhere in http://service.sap.com/swdc ?
    2) Besides, I would like to know if HR was part of IDES or not ?
    Thanks in advance for your answers.
    Best regards,
    Guillaume

    Hi,
    At http://service.sap.com/swdc you should select
    Search for all Categories and enter the search string IDES; however, I don't think the latest ECC IDES would be available for download.
    Regards,
    Siddhesh

  • WHERE LIKE% and ASP Performance Issue

    Hi,
    I am facing an issue with my ASP application, which I use as a front-end web application connecting to a huge Oracle database.
    Basically I use my queries within the ASP pages; one of them applies WHERE ... LIKE to more than one column.
    Example: I have Col1 and Col2, and I have created the following indexes:
    Index1 on Col1, Index2 on Col2 and Index3 on (Col1, Col2)
    From the ASP page I have Field1 and Field2 and would like to use LIKE on both fields, but the process takes a long time to return a result, not to mention the resources it consumes.
    My ASP Query:
    sqlstr = "Select * From TABLE Where COL1 Like '"&field1&"%' And COL2 Like '"&field2&"%' ORDER BY Num ASC"
    Set Rs = Conn.Execute(Sqlstr)
    What to use instead of this query to get same result but much faster (optimized)?
    Thanks.

    If the ratio of the data returned is appropriate for index access, the Oracle optimizer should choose to use it, but for further comments:
    a. I couldn't see your query in the output you provided?
    b. I need to know the data distribution: what is the ratio of the data returned to the whole table's data with the literals you use? You can check it by taking a count of the columns you indexed with a GROUP BY query.
    c. I assume that your indexes are in VALID status, that you collected statistics with dbms_stats and cascaded them to the indexes, and, depending on the question above, that your data is not skewed, which may create an extra need for histograms.
    d. I also assume your LIKE may start with '%'; in that case Oracle does not use indexes, and the Text option is what you need to read about as advised. For another smart idea on making “like ‘%xxxx’” use an index in Oracle you may check - http://oracle-unix.blogspot.com/2007/07/performance-tuning-how-to-make-like.html
    After you supply the query with the literals included and the data distribution, maybe as a last resort we need to force index access with a hint and compare the statistics provided by the timing and autotrace options of SQL*Plus.
    PS: You may also produce a 10053 event trace to understand the optimizer's decision - http://tonguc.wordpress.com/2007/01/20/optimizer-debug-trace-event-10053-trace-file/

  • Local NFS / LDAP on cluster nodes

    Hi,
    I have a 2-node cluster (3.2 1/09) on Solaris 10 U8, providing NFS (/home) and LDAP for clients. I would like to configure LDAP and NFS clients on each cluster node, so they share user information with the rest of the machines.
    I assume the right way to do this is to configure the cluster nodes the same as other clients, using the HA Logical Hostnames for the LDAP and NFS server; this way, there's always a working LDAP and NFS server for each node. However, what happens if both nodes reboot at once (for example, power failure)? As the first node boots, there is no working LDAP or NFS server, because it hasn't been started yet. Will this cause the boot to fail and require manual intervention, or will the cluster boot without NFS and LDAP clients enabled, allowing me to fix it later?

    Thanks. In that case, is it safe to configure the NFS-exported filesystem as a global mount, and symlink e.g. "/home" -> "/global/home", so home directories are accessible via the normal path on both nodes? (I understand global filesystems have worse performance, but this would just be for administrators logging in with their LDAP accounts.)
    For LDAP, my concern is that if svc:/network/ldap/client:default fails during startup (because no LDAP server is running yet), it might prevent the cluster services from starting, even though all names required by cluster are available from /etc.
