TimesTen Queries not returning

We are evaluating TimesTen IMDB Cache as an option to improve the performance of queries that perform aggregate operations across millions of rows.
My environment is Windows Server 2003 R2 x64 with an Intel Xeon X5660 (16 CPU cores) and 50 GB of RAM. The database is Oracle Database 11.2.0.2 Enterprise Edition.
I have the following DSN parameters.
DataStore Path + Name : H:\ttdata\database\my_ttdb
Transaction Log Directory : H:\ttdata\logs
Database Character Set: WE8MSWIN1252
First Connection:
Permanent Data Size: 26000
Temporary Data Size: 1600
IMDB Cache:
PassThrough: 1
The rest of the parameters are at their defaults.
I have created two read-only cache groups and one asynchronous writethrough (AWT) cache group.
The first read-only cache group is on table A with 108 rows, the second is on table B with 878,689 rows, and table C is the fact table with 20.5 million rows.
I have loaded these cache groups. Now I am trying to run join queries across these tables that aggregate and group by some measures in table C.
Using dssize I have confirmed that the data has been loaded into the permanent data area.
These queries execute in Oracle in around 5 seconds, and my expectation was that they would return in sub-second time.
But these queries do not return at all, even after hours. I have looked at the query plan and none of the steps are flagged as not indexed.
I have even tried simpler join queries without any aggregation; those get stuck too. The only queries I have been able to run are plain SELECT * statements against single tables.
What may be wrong in this setup/configuration? How do I debug what is causing the problem?
Thanks,
Mehta

Dear user2057059,
Could you provide more details about your question:
- Table structures (columns, indexes, constraints)
- Your query, together with its execution plan
20M rows is not a big database, especially for your hardware.
In my example:
CPU: Intel Core 2 Duo CPU 2.33 GHz,
RAM: 4 GB DDR2
HDD: 100 GB SATA-II
OS: Fedora 8 x64 (Linux 2.6.23.1-42.fcb)
+
Oracle TimesTen 7.0.5.0.0 (64 bit Linux)
Command> select count(*) from accounts;
Query Optimizer Plan:
  STEP:                1
  LEVEL:               1
  OPERATION:           TblLkSerialScan
  TBLNAME:             ACCOUNTS
  IXNAME:              <NULL>
  INDEXED CONDITION:   <NULL>
  NOT INDEXED:         <NULL>
< 30000000 >
1 row found.
Average time: 1.920321 sec (direct connection).
Regards,
Gennady
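A common cause of TimesTen join plans that never complete is missing or stale optimizer statistics on freshly loaded cache tables; the optimizer then picks serial scans with unindexed join conditions. As a hedged first debugging step (MY_FACT_TABLE and the query are placeholders for the actual fact table and query from the original post), statistics can be refreshed and the plan re-checked from ttIsql:
Command> call ttOptUpdateStats('MY_FACT_TABLE', 1);
Command> explain select b.dim_col, sum(c.measure) from my_fact_table c, dim_b b where c.b_id = b.id group by b.dim_col;
If the plan still shows TblLkSerialScan steps with join predicates listed under NOT INDEXED, creating indexes on those join columns is the usual next step.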

Similar Messages

  • Document() not returning a usable node set?

    If I open another document during a transform, I can dump out the file using
    xsl:message, but can't match any tags in the loaded file. If I replace the Sun
    implementation with the Apache implementation of Xalan via -Xbootclasspath/p,
    the code runs fine.
    Example fragment:
    <xsl:variable name="user_type">
      <xsl:for-each select="//include/@file_name">
        <xsl:variable name="xmlFile">
          <xsl:value-of select="$xml_dir"/>/<xsl:value-of select="."/>
        </xsl:variable>
        <xsl:for-each select="document($xmlFile)">
          <xsl:value-of select="key('type_def_key',$datatype)//datatype"/>
        </xsl:for-each>
      </xsl:for-each>
    </xsl:variable>
    <xsl:message>
      User type <xsl:value-of select="$user_type"/> for <xsl:value-of select="$datatype"/>
    </xsl:message>
    I believe the problem is that the document function is not returning a usable
    node set, or that all of the matches are going back to the original document rather
    than the one that was just opened. Within the context of the new document
    select="." appears to work, but nothing else seems to.
    Output using Sun's XML implementation (JDK 5.0, 6.0 snapshot):
    User type for Gms_User_Id
    Output using the Apache XML implementation (Xalan 2.6):
    User type numeric for Gms_User_Id
    The code and XML files are the same in both cases; only the runtime class path
    was changed (via -Xbootclasspath/p, as above).

    This is behaving as designed.
    Configuring a view object to expect certain entity subtypes does not automatically restrict the rows it queries. If the view object's query returns a row whose discriminator value does not match the design-time configured entity subtypes, that row is not added to the result row set.
    If you have limited the number of fetched rows to 1, and that row's discriminator attribute value does not match that of the expected entity subtypes, then the single row returned by the query is not added to the result row set, so the row set ends up with zero rows in it.

  • OCIStmtExecute does not return immediately when client is busy.

    Hi.
    I'm testing a very busy multi-threaded client server that consistently generates
    a large number of simple queries through OCI. The problem is that, when the
    server (client) is busy, OCIStmtExecute does not return immediately in
    non-blocking mode.
    Of course I have set non-blocking mode, and OCIStmtExecute does return
    OCI_STILL_EXECUTING immediately when the server is not busy. But
    when log rotation occurs, which concatenates a large text file (~500 MB)
    onto an even larger text file (up to several gigabytes), or when I simply copy
    or concatenate large text files manually, OCIStmtExecute returns very slowly
    (roughly after 100-200 ms).
    However, while log rotation takes place, everything else, including the other OCI
    calls that come before OCIStmtExecute (prepare, define), returns fast. So
    to me it really seems that only OCIStmtExecute becomes extremely slow
    when the local server (especially the disk) is busy.
    Is there any way to make OCIStmtExecute return immediately all the time?
    Thanks in advance.

    Yes, I knew that OCIStmtExecute would be the only function causing such
    a delay, and that is why I traced that call. So far I have checked several times
    what happens on the server at the exact moment, but everything was OK.
    Actually, OCIStmtExecute becomes slower exactly when the crontab-ed log rotation
    occurs, so for now I think this delay must be a client-side problem.
    This server is quite busy and has to respond fast, so it is important to
    guarantee fast response times, while a small number of timeout losses are tolerable.
    But as OCIStmtExecute's first OCI_STILL_EXECUTING return takes hundreds of
    milliseconds, it has become more like a blocking call, and currently I cannot find any way to do what I want.
    So now, every time this happens, the thread waits
    quite long; after the first OCI_STILL_EXECUTING return
    the time difference exceeds the timeout limit, and the thread
    calls OCIBreak() and OCIReset() and returns.

  • Why does my function not return anything when I create as a schema object

    I have user ABC, who owns several tables, some of which have foreign key constraints.
    I have user XYZ, who has been granted access to all tables owned by user ABC.
    When I create the function below as user XYZ, I get no return when I issue:
    select XYZ.ztm_tables_depended_on('ABC', 'A_TABLE_OWNED_BY_ABC') from dual;
    Please see after the function definition.
    CREATE OR REPLACE FUNCTION ZTM_TABLES_DEPENDED_ON(p_Owner VARCHAR2, p_Table_Name VARCHAR2) RETURN VARCHAR2 IS
      CURSOR C1 IS
      SELECT OWNER, CONSTRAINT_NAME, R_OWNER, R_CONSTRAINT_NAME
      FROM   ALL_CONSTRAINTS
      WHERE  OWNER           = p_Owner
      AND    TABLE_NAME      = p_Table_Name
      AND    CONSTRAINT_TYPE = 'R'
      ORDER  BY OWNER, CONSTRAINT_NAME, R_OWNER, R_CONSTRAINT_NAME;
      v_Referenced_Owner       VARCHAR2(31);
      v_Ret_Val                VARCHAR2(4000);
      FUNCTION CONSTRAINT_TABLE_NAME(p_Owner VARCHAR2, p_Constraint_Name VARCHAR2) RETURN VARCHAR2 IS
        CURSOR C1 IS
        SELECT TABLE_NAME
        FROM   ALL_CONSTRAINTS
        WHERE  OWNER           = p_Owner
        AND    CONSTRAINT_NAME = p_Constraint_Name;
        v_Ret_Val ALL_CONSTRAINTS.TABLE_NAME%TYPE;
      BEGIN
        OPEN  C1;
        FETCH C1 INTO v_Ret_Val;
        CLOSE C1;
        RETURN v_Ret_Val;
      END;
    BEGIN
      FOR R IN C1 LOOP
        IF (R.OWNER <> R.R_OWNER) THEN v_Referenced_Owner := R.R_OWNER || '.';
        ELSE                           v_Referenced_Owner := NULL;
        END IF;
        v_Ret_Val := v_Ret_Val || ', ' || v_Referenced_Owner || CONSTRAINT_TABLE_NAME (R.R_OWNER, R.R_CONSTRAINT_NAME);
      END LOOP;
      RETURN LTRIM(v_Ret_Val, ', ');
    END;
    But, if I embed the function within an anonymous block as follows, I get results:
    DECLARE
      CURSOR C1 IS
      select owner, table_name
      FROM   all_tables where owner = 'ABC';
      FUNCTION ZTM_TABLES_DEPENDED_ON(p_Owner VARCHAR2, p_Table_Name VARCHAR2) RETURN VARCHAR2 IS
        CURSOR C1 IS
        SELECT OWNER, CONSTRAINT_NAME, R_OWNER, R_CONSTRAINT_NAME
        FROM   ALL_CONSTRAINTS
        WHERE  OWNER           = p_Owner
        AND    TABLE_NAME      = p_Table_Name
        AND    CONSTRAINT_TYPE = 'R'
        ORDER  BY OWNER, CONSTRAINT_NAME, R_OWNER, R_CONSTRAINT_NAME;
        v_Referenced_Owner       VARCHAR2(31);
        v_Ret_Val                VARCHAR2(4000);
        FUNCTION CONSTRAINT_TABLE_NAME(p_Owner VARCHAR2, p_Constraint_Name VARCHAR2) RETURN VARCHAR2 IS
          CURSOR C1 IS
          SELECT TABLE_NAME
          FROM   ALL_CONSTRAINTS
          WHERE  OWNER           = p_Owner
          AND    CONSTRAINT_NAME = p_Constraint_Name;
          v_Ret_Val ALL_CONSTRAINTS.TABLE_NAME%TYPE;
        BEGIN
          OPEN  C1;
          FETCH C1 INTO v_Ret_Val;
          CLOSE C1;
          RETURN v_Ret_Val;
        END;
      BEGIN
        FOR R IN C1 LOOP
          IF (R.OWNER <> R.R_OWNER) THEN v_Referenced_Owner := R.R_OWNER || '.';
          ELSE                           v_Referenced_Owner := NULL;
          END IF;
          v_Ret_Val := v_Ret_Val || ', ' || v_Referenced_Owner || CONSTRAINT_TABLE_NAME (R.R_OWNER, R.R_CONSTRAINT_NAME);
        END LOOP;
        RETURN LTRIM(v_Ret_Val, ', ');
      END;
    BEGIN
      FOR R IN C1 LOOP
        DBMS_OUTPUT.PUT_LINE(ztm_tables_depended_on(R.Owner, R.Table_Name));
      END LOOP;
    END;
    Any ideas what is happening here?

    Any ideas what is happening here?
    Justin explained the probable reason.
    See the 'How Roles Work in PL/SQL Blocks' section of the database security doc for the details
    http://docs.oracle.com/cd/E25054_01/network.1111/e16543/authorization.htm#i1007304
    How Roles Work in PL/SQL Blocks
    The use of roles in a PL/SQL block depends on whether it is an anonymous block or a named block (stored procedure, function, or trigger), and whether it executes with definer's rights or invoker's rights.
    Roles Used in Named Blocks with Definer's Rights
    All roles are disabled in any named PL/SQL block (stored procedure, function, or trigger) that executes with definer's rights. Roles are not used for privilege checking and you cannot set roles within a definer's rights procedure.
    The SESSION_ROLES view shows all roles that are currently enabled. If a named PL/SQL block that executes with definer's rights queries SESSION_ROLES, then the query does not return any rows.
    Roles Used in Named Blocks with Invoker's Rights and Anonymous PL/SQL Blocks
    Named PL/SQL blocks that execute with invoker's rights and anonymous PL/SQL blocks are executed based on privileges granted through enabled roles. Current roles are used for privilege checking within an invoker's rights PL/SQL block. You can use dynamic SQL to set a role in the session.
    See the line starting with 'All roles are disabled in any named PL/SQL block'?
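    In practice this means user XYZ's visibility of ABC's rows in ALL_CONSTRAINTS probably comes through a role, and that role is disabled inside the definer's-rights function. A hedged sketch of the two usual fixes, with names taken from the thread (the GRANT below is illustrative; grant whatever privilege XYZ actually needs):
    -- Option 1: as ABC, grant the privilege directly to XYZ; direct grants,
    -- unlike roles, remain in effect inside definer's-rights functions.
    GRANT SELECT ON abc.a_table_owned_by_abc TO xyz;
    -- Option 2: recreate XYZ's function with invoker's rights, so roles
    -- enabled in the calling session are honored at run time. Only the
    -- header changes; the body stays as posted:
    -- CREATE OR REPLACE FUNCTION ztm_tables_depended_on(
    --   p_Owner VARCHAR2, p_Table_Name VARCHAR2)
    --   RETURN VARCHAR2 AUTHID CURRENT_USER IS ...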

  • Spatial Queries Not Always Producing Accurate Results

    Hi,
    Spatial queries are not always producing accurate results. Here are the issues; we would appreciate any clarification you could provide to resolve them.
    1. When querying for points inside a polygon that is not an MBR (minimum bounding rectangle), some of the coordinates returned are not inside the polygon. It is as though the primary filter is working, but not the secondary filter when using sdo_relate. How can we validate that the spatial query using sdo_relate is using the secondary filter?
    2. SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT returns TRUE when validating geometries even though we find results that are invalid.
    3. Illegal geodetic coordinates can be inserted into a table: latitude > 90.0, latitude < -90.0, longitude > 180.0 or longitude < -180.0.
    4. Querying for coordinates outside the MBR of the world, where illegal coordinates existed, did NOT return any rows, yet there were coordinates of long, lat: 181, 91.
    The following are examples and information relating to the above-referenced points.
    select * from USER_SDO_GEOM_METADATA
    TABLE_NAME          COLUMN_NAME  DIMINFO(SDO_DIMNAME, SDO_LB, SDO_UB, SDO_TOLERANCE)                                       SRID
    LASTKNOWNPOSITIONS  THE_GEOM     SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05), SDO_DIM_ELEMENT('Y', -90, 90, .05))  8307
    POSITIONS           THE_GEOM     SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', -180, 180, .05), SDO_DIM_ELEMENT('Y', -90, 90, .05))  8307
    Example 1: Query for coordinates inside NON-rectangular polygon includes points outside of polygon.
    SELECT l.vesselid, l.latitude, l.longitude, TO_CHAR(l.observationtime,
    'YYYY-MM-DD HH24:MI:SS') as obstime FROM lastknownpositions l where
    SDO_RELATE(l.the_geom,SDO_GEOMETRY(2003, 8307, NULL,
    SDO_ELEM_INFO_ARRAY(1, 1003, 1),
    SDO_ORDINATE_ARRAY(-98.20268,18.05079,-57.30101,18.00705,-57.08229,
    54.66061,-98.59638,32.87842,-98.20268,18.05079)),'mask=inside')='TRUE'
    This query returns the following coordinates that are outside of the polygon:
    vesselid : 1152 obstime : 2005-08-24 06:00:00 long : -82.1 lat : 45.3
    vesselid : 3140 obstime : 2005-08-28 12:00:00 long : -80.6 lat : 44.6
    vesselid : 1253 obstime : 2005-08-22 09:00:00 long : -80.0 lat : 45.3
    Example 2a: Using SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT
    Select areaid, the_geom,
    SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(the_geom, 0.005) from area where
    areaid=24
    ResultSet:
    AREAID THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO,
    SDO_ORDINATES) SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(THE_GEOM,0.005)
    24 SDO_GEOMETRY(2003, 8307, NULL, SDO_ELEM_INFO_ARRAY(1, 1003, 1), SDO_ORDINATE_ARRAY(-98.20268, 18.05079, -57.30101, 18.00705, -57.08229, 54.66061, -98.59638, 32.87842, -98.20268, 18.05079)) TRUE
    Example 2b: Using SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT
    Select positionid, vesselid, the_geom,
    SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(the_geom, 0.005) from positions where vesselid=1152
    ResultSet:
    POSITIONID VESSELID THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z),
    SDO_ELEM_INFO, SDO_ORDINATES) DO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(THE_GEOM,0.005)
    743811 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-82.1, 45.3, NULL), NULL, NULL) TRUE
    743812 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-82.1, 45.3, NULL), NULL, NULL) TRUE
    743813 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-80.2, 42.5, NULL), NULL, NULL) TRUE
    743814 1152 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-80.2, 42.5, NULL), NULL, NULL) TRUE
    Example 3: Invalid Coordinate values found in POSITIONS table.
    SELECT p.positionid, p.latitude, p.longitude, p.the_geom FROM positions p
    WHERE p.latitude < -180.0
    2 lines from ResultSet:
    POSITIONID LATITUDE LONGITUDE THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    714915 -210.85408 -79.74449 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-79.74449, -210.85408, NULL), NULL, NULL)
    714938 -211.13632 -79.951256 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(-79.951256, -211.13632, NULL), NULL, NULL)
    SELECT p.positionid, p.latitude, p.longitude, p.the_geom FROM positions p
    WHERE p.longitude > 180.0
    3 lines from ResultSet:
    POSITIONID LATITUDE LONGITUDE THE_GEOM(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    588434 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
    589493 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
    589494 91 181 SDO_GEOMETRY(2001, 8307, SDO_POINT_TYPE(181, 91, NULL), NULL, NULL)
    Example 4: Failure to locate illegal coordinates by querying for disjoint coordinates outside of MBR for the world:
    SELECT p.vesselid, p.latitude, p.longitude, p.the_geom,
    TO_CHAR(p.observationtime, 'YYYY-MM-DD HH24:MI:SS') as obstime,
    SDO_GEOM.RELATE(p.the_geom, 'determine',
    SDO_GEOMETRY(2003, 8307, NULL,SDO_ELEM_INFO_ARRAY(1, 1003, 1),
    SDO_ORDINATE_ARRAY(-180.0,-90.0,180.0,-90.0,180.0,90.0,
    -180.0,90.0,-180.0,-90.0)), .005) relationship FROM positions p where
    SDO_GEOM.RELATE(p.the_geom, 'disjoint', SDO_GEOMETRY(2003, 8307,
    NULL,SDO_ELEM_INFO_ARRAY(1, 1003, 1),
    SDO_ORDINATE_ARRAY(-180.0,-90.0,180.0,-90.0,180.0,90.0,-80.0,90.0,
    -180.0,-90.0)),.005)='TRUE'
    no rows selected
    Carol Saah

    Hi Carol,
    1) I think the results are correct. Note that in a geodetic coordinate system, adjacent points in a linestring or polygon are connected via geodesics. You are probably applying planar thinking to an ellipsoidal problem! I don't have time to do the full analysis right now, but that is my first guess as to what is happening.
    2) The query window seems to be valid. I don't think this is a problem.
    3) Oracle will let you insert almost anything into a table. In the index, it probably wraps. If you validate using the signature that takes diminfo, where the coordinate system bounds are included in the validation, I think the validation routines will tell you the data is illegal.
    4) Your query window is not valid, and your data is not valid. As the previous reply stated, you need to have valid data. If you think in terms of a geodetic coordinate system, you will realize that -180.0,-90.0 and 180.0,-90.0 are really the same point. Also, Oracle has a rule that polygon geometries cannot be greater than half the surface of the Earth.
    Hope this helps.
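    To illustrate point 3, the diminfo signature of the validation routine includes the coordinate-system bounds in the check. A hedged sketch against the POSITIONS table from the thread, which should flag rows such as the latitude -210.85408 ones as invalid rather than TRUE:
    SELECT p.positionid,
           SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(p.the_geom, m.diminfo) AS validation
    FROM   positions p,
           user_sdo_geom_metadata m
    WHERE  m.table_name  = 'POSITIONS'
    AND    m.column_name = 'THE_GEOM';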

  • Issue with DISTINCT clause in query: ref cursor not returning rows

    URGENT
    Hi, the following procedure returns records, but when I add a DISTINCT clause to the OPEN v_refcursor FOR SELECT,
    i.e. open v_refcursor for select distinct column1, column2 ..., the procedure does not return records. Please help: what is the issue here?
    CREATE OR REPLACE procedure proc_shared_report3 (in_cust_id varchar2,in_user_array user_tab , in_acct_array acct_tab,in_report_pvlg_hier varchar2,in_fmt_pvlg_hier varchar2,v_refcursor OUT sys_refcursor)
    is
    BEGIN
    if  in_acct_array.count>0 and in_fmt_pvlg_hier is not null and in_cust_id is not null and in_user_array.count>0 and in_report_pvlg_hier is not null  then
    dbms_output.put_line('all are not null');
    open v_refcursor for select usr_id from vw_get_user_profile where usr_id in (
            SELECT USR_ID FROM VW_GET_ACCOUNTS WHERE USR_ID IN (SELECT distinct up1.usr_id FROM vw_get_user_privileges up1,vw_get_user_privileges up2  WHERE 
            up1.usr_id=up2.usr_id and
            up1.product_hierarchy=in_report_pvlg_hier  and up2.PRODUCT_HIERARCHY=in_fmt_pvlg_hier AND up1.usr_id in (select usr_id from vw_get_user_for_customer where
            cust_id=in_cust_id)
            and up1.usr_id Member Of in_user_array) AND acct_nb Member of in_acct_array);
    /* if account list is null and rest all  not  null */
    elsif (in_acct_array is null or in_acct_array IS EMPTY or in_acct_array.count = 0) and in_fmt_pvlg_hier is not  null  and in_cust_id is not null and in_user_array.count>0 and in_report_pvlg_hier is not null  then
    dbms_output.put_line('acc is null and rest are not null');
    open v_refcursor for select usr_id, usr_type_cd from vw_get_user_profile where usr_id in (SELECT distinct up1.usr_id FROM vw_get_user_privileges up1, vw_get_user_privileges up2 WHERE
              up1.usr_id=up2.usr_id and
              up1.product_hierarchy=in_report_pvlg_hier and up2.PRODUCT_HIERARCHY=in_fmt_pvlg_hier AND   up1.usr_id in (select usr_id from vw_get_user_for_customer where
              cust_id=in_cust_id)
              and up1.usr_id Member Of in_user_array);
    /* if account list is null and format pvlg hierarchy is null */
    elsif  (in_acct_array is null or in_acct_array IS EMPTY or in_acct_array.count = 0) and in_fmt_pvlg_hier is null and in_cust_id is not null and in_user_array.count>0 and in_report_pvlg_hier is not null
    then
    dbms_output.put_line('acc is null and format is null null');
    open v_refcursor for select usr_id, tmzon_cd from vw_get_user_profile where usr_id in (
      (SELECT  distinct usr_id FROM vw_get_user_privileges  WHERE  product_hierarchy=in_report_pvlg_hier  AND   usr_id in (select usr_id from vw_get_user_for_customer where
       cust_id=in_cust_id)
       and usr_id Member Of in_user_array))  ;
    /* if account list is not null and format pvlg hierarchy is  null */
    else  -- If i get only one privilege and all other inputs then 
    dbms_output.put_line('acc list is not null and format pvlg is null');
    open v_refcursor for select usr_id, prod_shrt_nm from vw_get_user_profile where usr_id in (
         SELECT USR_ID FROM VW_GET_ACCOUNTS WHERE USR_ID IN (SELECT  distinct usr_id FROM vw_get_user_privileges  WHERE 
         product_hierarchy=in_report_pvlg_hier  AND   usr_id in (select usr_id from vw_get_user_for_customer where
         cust_id=in_cust_id) and usr_id Member Of in_user_array ) AND acct_nb Member of in_acct_array);
    END IF;
    END proc_shared_report3;
    Edited by: raj_fresher on Aug 12, 2009 8:50 AM
    Edited by: raj_fresher on Aug 12, 2009 1:40 PM

    It's like this ......?
    You have to understand that without any test data / a reproducible test case, all I can do is ask you to test some alternatives; I'm just guessing from what I can make of your example, and I hope you'll post back any differences in a clear and concise way.
    What I didn't understand is why you're not just joining your tables/views instead of using inner queries.
    Unfortunately you're not providing any feedback on that; on the other hand, you've narrowed your problem down to a specific part of your query, but you're not sharing why/what made you decide to take that turn.
    But: you're still using inner queries in your reply.
    So, sorry, I cannot proceed/say anything useful unless you answer my previous post.
    Because there are still many other questions/doubts: I don't understand why you're not aliasing your columns properly, for example.
    I don't know why you've suddenly switched to VW_GET_USER_cust_sa?
    I don't know if you've tried the straight join instead of nesting queries?
    And if you did, what's the problem?
    Please read and understand this, so you'll get your answers at the speed of light the next time you've got a question or problem:
    http://tkyte.blogspot.com/2005/06/how-to-ask-questions.html
    And don't refrain from using that tag ;)
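    For what it's worth, the straight-join rewrite hinted at above might look roughly like this for the first branch of the procedure (view and column names are taken from the posted code; untested without the poster's data):
    open v_refcursor for
      select distinct prof.usr_id
      from   vw_get_user_profile      prof
      join   vw_get_accounts          acct on acct.usr_id = prof.usr_id
      join   vw_get_user_privileges   up1  on up1.usr_id  = prof.usr_id
      join   vw_get_user_privileges   up2  on up2.usr_id  = prof.usr_id
      join   vw_get_user_for_customer cust on cust.usr_id = prof.usr_id
      where  up1.product_hierarchy = in_report_pvlg_hier
      and    up2.product_hierarchy = in_fmt_pvlg_hier
      and    cust.cust_id          = in_cust_id
      and    prof.usr_id  member of in_user_array
      and    acct.acct_nb member of in_acct_array;
    With explicit joins the DISTINCT has an obvious job to do (the privilege views can match several rows per user), which makes it easier to reason about why adding it changes the result.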

  • Custom reports in R12 not returning any data

    I'm seeing an issue of custom reports in R12 not returning any data. So far, what I've been able to dig into is that the report query itself refers to APPS synonyms that are secured under MOAC. I am able to run the query in SQL Developer after running mo_global.set_policy_context, and I receive the data I'm looking for. This is not the case when running the report through a concurrent request.
    Secondly, I've also configured the concurrent program to run in single operating unit mode. This fixed the issues with LOVs on the parameter form; however, there are still no results from the report itself. I've also searched around quite extensively and added a P_CONC_PROGRAM_ID user parameter in Reports Developer, as well as adding SRW.USER_EXIT('FND SRWINIT') to the before-report trigger and SRW.USER_EXIT('FND SRWEXIT') to the after-report trigger. No luck. I've also tried running mo_global.set_policy_context in the before-report trigger, with no luck either.
    Has anyone seen similar issues and could shed some light? At this point, the workaround I see is to change the query to use the respective _ALL tables and match against the org_id, but I'd rather not do that.
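    For reference, the SQL Developer check described above would look roughly like this (the org_id 204 and the synonym name are placeholders, not from the thread):
    BEGIN
      mo_global.set_policy_context('S', 204);  -- 'S' = single org mode; 204 is a placeholder org_id
    END;
    /
    SELECT count(*) FROM apps.xx_moac_secured_synonym;  -- hypothetical secured synonym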

    I am facing exactly the same issue with two of my custom reports in R12.1.3. We did our level best to find out the issue; it was always a hit-and-trial fix.
    But the following were our observations and workarounds:
    0) The reports' concurrent programs were in MOAC single mode. Both reports had SRW INIT, EXIT and P_CONC_REQUEST_ID.
    1) The reports which caused the issues (no data fetched under MOAC) had multiple queries in the data model, where one query fetches data and the second one doesn't. Both queries, when run from SQL Developer after setting the org, gave results.
    Workaround for report 1:
    a) We deleted the data link connecting the two columns in both queries and instead added a bind variable in the second/child query to refer to the column in the parent query, and linked the two groups.
    Workaround for report 2:
    a) In report 2 the queries were not linked to each other. One query used to return results and the second returned 0 rows. The workaround was to pass the WHERE conditions as lexical parameters, after setting their values in the before-report trigger.
    Both workarounds look illogical. Still, they worked!
    Thanks
    Biju Radhakrishnan

  • TimesTen Queries

    Dear Forum,
    Background: We have a single database instance and 22 application instances (RAC is not an allowed solution, per higher management decision).
    If we implement IMDB Cache:
    1. Will there be any overhead associated with reads and writes (synchronous)? If yes, how do we quantify it?
    2. The application demands the use of a global cache (for transactional data). Will there be latency issues owing to multiple nodes? How does Oracle manage the nodes in the cache grid?
    3. What is the optimum grid size to propose? Is there any guideline (e.g. 8 grid members, 16, or all 22)?
    4. What is the minimum infrastructure required?
    5. The application uses XML files as BLOB/CLOB. Is TimesTen okay to handle BLOB/CLOB?
    Sorry for flooding you with queries, but any pointer/advice/guidance would be greatly appreciated.
    Thanks
    ORA_IMPL

    Thanks for your detailed answer. This really helps and gives me insight into TT. I'd appreciate it if you could answer the following follow-up queries, tagged as 'DM' and underlined text.
    Thanks for sharing your experience; your contribution is very helpful for junior colleagues like us.
    Hi,
    These questions are not easily answered in isolation. A decision to adopt TimesTen, and especially TimesTen Cache Grid, needs to take into account many factors. For example, adding TimesTen into the picture will not be transparent to the application; application code changes, possibly significant ones, will be needed. Also, the behaviour and characteristics of cache grid impose constraints on the kind of SQL that is grid capable. In addition, application data access needs to (a) be mostly localised to the grid node where the data currently resides and (b) have a strong temporal locality of reference if good performance is to be achieved. So there is a lot to consider and investigate before you can make an informed decision as to whether TT cache grid is the correct solution for you. Having said that, I will try to answer your questions as best I can:
    1. Will there be any overhead associated with reads and writes (synchronous)? If yes, how do we quantify it?
    CJ>> I don't really understand this question. Reads would normally be satisfied from the IMDB Cache and so will be much, much faster than reading from Oracle. For grid-capable queries that reference data not present in the local grid node, the data will be retrieved from another grid node (preferentially) or, if the data is not in any grid node, from Oracle. Clearly there is significant overhead in this case compared to the case where the data is already present in the local grid node. There are many factors that can affect the relative performance, but typically a local access will take a few microseconds, a retrieval from another grid member a millisecond or two, and a retrieval from Oracle several milliseconds. The only way to know is to test your queries, with your schema, on your typical hardware, and measure it. For write operations, all changes are propagated asynchronously to Oracle (there is no synchronous option), so the overhead is very small and updates are very fast (much faster than in Oracle). However, if a query/DML tries to access a row located in a different grid member for which there are committed, unpropagated updates, then that access will block until those updates have reached Oracle (to ensure data consistency). In this case there could be a large overhead. Again, this depends very much on your specific setup and data access patterns, so you need to measure it.
    DM: If there are unpropagated changes in node 1 and node 2 tries to read the same data, which data will node 2 read? (The old data, or is node 2's read blocked, or are node 1's unpropagated changes committed first and node 2's read then allowed?)
    2. The application demands the use of a global cache (for transactional data). Will there be latency issues owing to multiple nodes? How does Oracle manage the nodes in the cache grid?
    CJ>> That question has a big answer! Best to read some of the information on Cache Grid in the documentation. Essentially, cache grid implements the concept of data ownership and enforces that there will only ever be one copy of any piece of cached data within the grid at any one time. Only the grid node that 'owns' the data can read/write it. If a different grid node needs access to the data, the data itself is transferred to that grid node and the ownership updated. Depending on how data is distributed and the application access pattern, there may be very little overhead or a great deal of overhead. Your application must be designed in a 'grid aware' fashion to gain full benefit from grid.
    DM: The database contains around 60% semi-static reference data. This data is used dynamically by all the different applications to validate their work content; essentially all of these applications are reading the same data. It is a prime requirement that any changes to the semi-static reference data are propagated in a timely fashion to all downstream processing.
    3. What is the optimum grid size to propose? Is there any guideline (e.g. 8 grid members, 16, or all 22)?
    CJ>> There is no one optimum size. It depends on data volumes, the type of hardware and the workload to be supported.
    4. What is the minimum infrastructure required?
    CJ>> Realistically you need to use replicated grid nodes and Oracle Clusterware for grid management. The smallest grid that can be configured is therefore two physical machines (each acting as the replication standby for the other) running TimesTen and Clusterware. For Clusterware you also need some supported shared storage for the OCR and voting disks (e.g. NetApp filers or some kind of SAN).
    5. The application uses XML files as BLOB/CLOB. Is TimesTen okay to handle BLOB/CLOB?
    CJ>> TimesTen does not currently support XML or CLOBs/BLOBs. You can cache Oracle CLOB/BLOB data in TimesTen VARCHAR2/VARBINARY columns. Access to this data in TimesTen will be via the SQL API, and you must retrieve/update the entire CLOB/BLOB value. Any XML interpretation must be done by the application.
    DM: Currently all the applications' to-and-fro communication happens as CLOB/BLOB and XML. If we have to rely on the SQL API and update the whole CLOB/BLOB value, it seems that this will increase the processing time. I see this as a potentially constraining factor for using TT.
    Context: The application deals mostly with XML files. It receives payments in XML, converts them to an internal format (again XML), applies business logic and converts them again to the outbound formats (XML) before sending them on. These are currently stored as BLOBs and CLOBs in the database. The size of the database is 4 TB.
    There is one single database instance and 20 application instances. The application uses JDO as its ORM framework. With the growing volumes of data, JDO, together with multiple calls to the database during the processing of each transaction (mainly due to the BLOBs/CLOBs), is a known performance bottleneck.
    I am trying to find an improvement solution. TT may not fit the bill. Is any other Oracle feature worth investigating?
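    As a concrete illustration of the CLOB-to-VARCHAR2 mapping CJ describes, a cache group definition might look like the sketch below (schema, table and column names are invented; TimesTen VARCHAR2 columns max out at 4 MB, so larger payloads would not fit):
    CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP awt_payments
    FROM appowner.payments
      ( payment_id  NUMBER NOT NULL,
        payload_xml VARCHAR2(4194304),  -- Oracle-side CLOB cached as a VARCHAR2
        PRIMARY KEY (payment_id)
      );
    The application would then read and write payload_xml as an ordinary string through the SQL API, replacing the whole value on every update.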

  • UTL_FILE not returning any values in .CSV (Oracle PL/SQL HRMS)

    Hello experts,
    We want to extract employee details from core HRMS, with the output as a .csv file.
    I have created the package below using UTL_FILE, but it does not return any values in the CSV file. Please help me with this:
    CREATE OR REPLACE PACKAGE BODY APPS."XXPER_PAYROLL_XIR_PKG"
    AS
    procedure payroll_main (
    in_from_date IN date,
    in_to_date IN date,
    p_errbuf OUT VARCHAR2,
    p_retcode OUT NUMBER
    ) is
    cursor c_person is
    SELECT distinct ppf.person_id
    FROM per_person_types ppt,
    pay_payrolls_f pp,
    per_person_type_usages_f pptu,
    per_all_people_f ppf,
    per_all_assignments_f ppa
    WHERE ppf.person_id = pptu.person_id
    AND pptu.person_type_id = ppt.person_type_id
    AND ppt.user_person_type in ('Employee', 'Ex-employee' ,'Intern')
    AND ppf.person_id = ppa.person_id
    AND ppa.assignment_type = 'E'
    AND ppa.primary_flag = 'Y'
    AND ppa.payroll_id = pp.payroll_id
    -- and nvl(ppa.ass_attribute10, 'N') = 'Y' -- this is in Production
    AND pp.payroll_name = 'Calendar Payroll'
    AND trunc( ppf.last_update_date) between in_from_date and in_to_date
    AND trunc( ppa.last_update_date) between in_from_date and in_to_date
    AND trunc( ppt.last_update_date) between in_from_date and in_to_date
    AND trunc( pptu.last_update_date) between in_from_date and in_to_date
    AND trunc(sysdate) between ppf.effective_start_date and ppf.effective_end_date
    AND trunc(sysdate) between ppa.effective_start_date and ppa.effective_end_date
    AND trunc(sysdate) between ppf.effective_start_date and ppf.effective_end_date
    AND trunc(sysdate) between pptu.effective_start_date and pptu.effective_end_date
    AND ppf.employee_number <> 'NONE';
    v_person_id number;
    v_dom_eff_dt date;
    io_err_flag varchar2(1);
    io_err_desc varchar2(2000);
    io_record_update boolean;
    BEGIN
    dbms_output.put_line('in_from_date-' || in_from_date);
    dbms_output.put_line('in_to_date-' || in_to_date );
    open c_person;
    loop
    fetch c_person into v_person_id;
    exit when c_person%NOTFOUND;
    get_person_information( v_person_id ,
    in_from_date ,
    in_to_date ,
    io_err_flag ,
    io_err_desc );
    dbms_output.put_line('Per_id-' || v_person_id || ': ' || 'Payroll_main' || sqlcode || sqlerrm );
    end loop;
    close c_person;
    end payroll_main;
    PROCEDURE get_person_information(in_person_id IN number,
    in_from_date IN date,
    in_to_date IN date,
    io_err_flag IN OUT varchar2,
    io_err_desc IN OUT varchar2) is
    --v_service_date        DATE;
    v_data_file UTL_FILE.FILE_TYPE;
    v_record VARCHAR2(1000);
    v_header VARCHAR2(1000);
    v_record_mail VARCHAR2(1000);
    v_file_name VARCHAR2(100);
    v_file_name_mail VARCHAR2(100);
    v_ser_name VARCHAR2(50);
    v_dir VARCHAR2(100) :='XX_XIR_HR_PAY';
    v_con_req_id NUMBER;
    v_user_id NUMBER;
    v_resp_id NUMBER;
    v_resp_appl_id NUMBER;
    p_request_id NUMBER;
    P_status VARCHAR2(100);
    cursor c1 is
    SELECT ppf.person_id per_id,
    ppf.employee_number emp_num,
    ppf.first_name f_name,
    ppf.last_name l_name,
    ppf.national_identifier nat_id,
    ppf.date_of_birth dat_birth,
    ppf.sex sex_g,
    -- TO_DATE(ppf.attribute1,'DD-MON-YYYY') att_date,
    ppt.user_person_type per_type,
    ppf.email_address email_id,
    ppf.last_update_date last_upd_date
    FROM per_person_type_usages_f pptu,
    per_person_types ppt,
    per_people_f ppf
    WHERE ppf.person_id = in_person_id
    AND ppf.person_id = pptu.person_id
    AND pptu.person_type_id = ppt.person_type_id
    and trunc(ppf.last_update_date) between in_from_date and in_to_date
    and trunc( pptu.last_update_date) between in_from_date and in_to_date
    AND trunc( ppt.last_update_date) between in_from_date and in_to_date
    AND trunc(sysdate) between ppf.effective_start_date and ppf.effective_end_date
    AND trunc(sysdate) between pptu.effective_start_date and pptu.effective_end_date
    AND ppt.user_person_type in ('Employee', 'Ex-employee' ,'Intern')
    and ppt.system_person_type in ('EX_EMP', 'EMP') ;
    cursor c2 is
    SELECT substr(pa.address_line1, 1, 60) address_line1,
    substr(pa.address_line2, 1, 60) address_line2,
    pa.town_or_city city,
    pa.region_2,
    pa.postal_code,
    pa.region_1,
    pa.country,
    pa.date_from,
    nvl(pa.date_to, to_date('12/31/4712', 'mm/dd/yyyy')),
    pa.creation_date,
    pa.last_update_date
    FROM per_addresses pa
    WHERE pa.person_id = in_person_id
    AND pa.last_update_date between in_from_date and in_to_date
    AND trunc(sysdate) between date_from and nvl(date_to, to_date('12/31/4712', 'mm/dd/yyyy'))
    AND pa.primary_flag = 'Y';
    cursor c3 is
    SELECT pp.phone_number pnum
    FROM per_phones pp
    WHERE pp.parent_id = in_person_id
    AND pp.last_update_date between in_from_date and in_to_date
    AND trunc(sysdate) between date_from and nvl(date_to, to_date('12/31/4712', 'mm/dd/yyyy'))
    AND phone_type = 'H1';
    cursor c4 is
    SELECT ppos.period_of_service_id,
    ppos.date_start start_date,
    ppos.adjusted_svc_date,
    ppos.actual_termination_date,
    TO_DATE(ppos.attribute1,'yyyy/mm/dd hh24:mi:ss') last_day
    FROM per_periods_of_service ppos
    WHERE ppos.person_id = in_person_id
    AND ppos.last_update_date between in_from_date and in_to_date
    AND ppos.date_start = ( select max(date_start)
    from per_periods_of_service
    where person_id = in_person_id
    and trunc(date_start) <= trunc(sysdate) );
    Begin
    dbms_output.put_line('Directory:'||v_dir);
    BEGIN
    v_header := 'Employee Number'||CHR(9)
    ||'Fore Name'||CHR(9)
    ||'Sur Name'||CHR(9)
    ||'PPS'||CHR(9)
    ||'Date of birth'||CHR(9)
    ||'Gender'||CHR(9)
    ||'Person Type'||CHR(9)
    ||'Email Address'||CHR(9)
    ||'Last Update Date'||CHR(9)
    ||'Address1'||CHR(9)
    ||'Address2'||CHR(9)
    ||'Phone_Number'||CHR(9)
    ||'Start Date'||CHR(9)
    ||'Last Day';
    v_file_name :='XIR_DATA.xls';
    dbms_output.put_line('Excel_file_name:'||v_file_name);
    v_data_file :=UTL_FILE.FOPEN(v_dir,v_file_name, 'W');
    UTL_FILE.PUT_LINE(v_data_file,v_header);
    FOR i IN c1 LOOP
    FOR j IN c2 LOOP
    FOR k IN c3 LOOP
    FOR m IN c4 LOOP
    dbms_output.put_line(' excel file loop ');
    v_record:=i.emp_num||CHR(9)
    ||i.f_name||CHR(9)
    ||i.l_name||CHR(9)
    ||i.nat_id||CHR(9)
    ||i.dat_birth||CHR(9)
    ||i.sex_g||CHR(9)
    ||i.per_type||CHR(9)
    ||i.email_id||CHR(9)
    ||i.last_upd_date||CHR(9)
    || j.address_line1||CHR(9)
    ||j.address_line2||CHR(9)
    ||k.pnum||CHR(9)
    ||m.start_date||CHR(9)
    ||m.last_day;
    UTL_FILE.PUT_LINE(v_data_file,v_record);
    END LOOP;
    END LOOP;
    END LOOP;
    END LOOP;
    UTL_FILE.FCLOSE(v_data_file);
    dbms_output.put_line(' close excel file');
    EXCEPTION
    WHEN OTHERS THEN
    dbms_output.put_line('unable to create data excel file in proc1'||SUBSTR(SQLERRM,1,25));
    END;
    end get_person_information;
    END xxper_payroll_xir_pkg;
    EE Number     Forename     Surname     Dept     effective Start Date_dept     Start Date     Leave Date     effective Start Date_leave_Dt     Date of Birth     Address1     
    Thanks
    Edited by: 981527 on Feb 12, 2013 5:59 AM
    Edited by: 981527 on Feb 12, 2013 6:05 AM

    Did you make sure that the queries are returning values in the first place? From the last few lines, it looks like you have the header written to the file. Is that correct?
    -Karthik
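    Following on from Karthik's point: because the FOR loops are nested, a line is written only when all four cursors return rows for a person, so a missing primary address or 'H1' phone silently drops that employee. A quick hedged check for one person_id (12345 is a placeholder) could be:
    SELECT count(*) FROM per_addresses          WHERE person_id = 12345 AND primary_flag = 'Y';
    SELECT count(*) FROM per_phones             WHERE parent_id = 12345 AND phone_type   = 'H1';
    SELECT count(*) FROM per_periods_of_service WHERE person_id = 12345;
    If any of these return 0, or the last_update_date range filters exclude the rows, nothing after the header will be written to the file.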

  • TopLink does not return the object sometimes - very high volume application

    Hi All,
    Toplink version: 10.1.3.4, OAS 10.1.3.4, Oracle 10g, RedHat linux 4.0
    We have a high-volume application where TopLink issues one of our queries around 25K times/hour. The query is a simple one, querying the table on the primary key (Id), and should return one object.
    This query sometimes does not return the object, I should say around 200 times/hour. What bothers me is that the same query on the same Id (primary key) returns the object at other times.
    I enabled the TopLink logging level FINE, where I could see the SQL query being issued to the DB; I was unsuccessful in changing the logging level to ALL to check details such as whether the object is returned by the TopLink layer and registered in the cache.
    This is not easily reproducible, as it only occurs in our pre-prod environment, where we can simulate high volume. This completely baffles me and I am running out of ideas.
    Thanks in advance.
    -Ram
    Edited by: Raam on Jun 4, 2009 7:41 PM

    If you are configured for an isolated cache then there should be no concurrency issues; each thread will query the database directly and load the object into the UnitOfWork. Is there an exception handler configured on TopLink? Are you getting a timeout exception, or some other exception that is being ignored?
    --Gordon

  • HT3702 I contacted a vendor about an app and they have not returned my email. I want to cancel the purchase.  What should I do?

    I contacted a vendor about an app and they have not returned my email. I want to cancel the purchase. What should I do?

    You probably cannot get a refund, since the terms of sale for the iTunes Store state that all sales are final. You can contact the iTunes Store, explain the reason for your request, and ask, though:
    http://www.apple.com/support/itunes/contact.html
    It's possible they'll make an exception for you, particularly if the app is defective in some way. If you just don't like or no longer need the app, a refund is unlikely.
    Good luck.
    Forum Tip: Since you're new here, you've probably not discovered the Search feature available on every Communities page. Next time, it might save you time (and save everyone else from having to answer the same question multiple times) if you search a couple of ways for a topic, both in the relevant forums and in the Apple Knowledge Base, before you post a question.

  • SAP BW Bex 7.3 Queries not working with Enterprise Portal (EP) 7.3 ABAP stack only

    SAP BW Bex 7.3 Queries not working with Enterprise Portal (EP) 7.3 ABAP Stack
    Dear Portal Gurus,
    We want to integrate SAP BW BEx 7.3 queries into an Enterprise Portal 7.3 ABAP-stack-only installation.
    1) We have done SSO between the EP and BI systems.
    2) The system object is created.
    3) When we try to create an iView of a BEx 7.3 report, it executes the report on the BI server; we don't have a Java stack on the BI system.
    4) We have updated the required table in the BI system for executing the report on EP.
    5) RFC is OK, the system object test is OK, and the report works independently in Business Explorer, but not from EP.
    There are no irj services available on the BI system, as it is ABAP stack only.
    Can anybody help us in achieving our requirement?
    Thanks in advance
    Michael Wecker

    Hi Michael,
    To integrate portal with BI you need to perform BI Portal integration steps.
    Refer to a document link below for guidance purpose.
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/b0a5216a-349c-2a10-9baf-9d4797349f6a?overridelayout=t…
    Ensure that you have performed all these relevant steps.
    Hope this helps.
    Regards,
    Deepak Kori

  • I'm having all sorts of connection issues on an iPhone I bought in the UK 6 months ago. Am I able to use a Genius Bar abroad, as I'm not returning to the UK?

    I bought an iPhone 5 back in Feb/March and since then it's slowly been getting worse and worse: my Wi-Fi hardly picks up any signals, and when it does, they drop out all the time. My girlfriend's iPhone 4 picks up and has no trouble holding onto the Wi-Fi, while sitting at the same table mine shows no signals. I'm not returning to the UK any time soon, so I want to know if it's possible to visit an Apple Store abroad for a replacement. I'm only 6 months into my AppleCare, so I wouldn't think this should pose an issue. Any help would be fantastic.
    many thanks

    They will not repair the device in the US; you must send it back to the country of purchase or set up the repair through them. The UK AppleCare number is as follows: (44) 0844 209 0611***. You should be able to set up a mail-in repair through them.

  • T-SQL: SET Statement Using a Case Statement is not returning a value

    SQL version: 2008
    Please see the attached screenshot from SSMS. The SET statement below is not returning a value, or may be returning a zero value. The AllocPercent field is set to zero in code preceding this UPDATE statement:
    AllocPercent = CASE
                     WHEN AllocBase IS NULL OR AllocBase = 0 THEN 0
                     ELSE PM_Input/AllocBase
                   END
    As you can see from the screenshot of the SELECT statement displaying the results after the UPDATE statement completed, the value of AllocPercent is 0 even though all the values necessary to compute a value > 0 are present in the table. Incidentally, the values in both the AllocBase and PM_Input fields are the same before the UPDATE statement is run as in the SELECT run after the UPDATE statement.
    I have run this type of code several times with never a problem. This seems really simple, but I just can't seem to get it to compute.
    Any help would be greatly appreciated.
    Regards,
    bob sutor
    Bob Sutor

    Yep, this statement is running in a stored procedure and inside its own block.
    I did get a response that suggested I change the SET statement as follows, and that worked:
    AllocPercent = CASE
                     WHEN AllocBase IS NULL OR AllocBase = 0 THEN 0.00
                     ELSE CAST(PM_Input*1.0/AllocBase*1.0 AS decimal(6,2))
                   END
    Your response did get me thinking which was great!  I appreciate your help.
    Regards,
    ...bob sutor
    Bob Sutor
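    For the record, the likely culprit in the original statement is T-SQL integer division: if PM_Input and AllocBase are integer columns (as the working fix implies), PM_Input/AllocBase truncates toward zero, and multiplying by 1.0 promotes the operands so the division keeps its decimal part. A minimal repro:
    -- integer / integer truncates; promoting either operand fixes it
    SELECT 5/10     AS int_div,      -- 0
           5*1.0/10 AS decimal_div;  -- 0.500000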

  • ExecuteWithParams is not returning the rows from cache

    Hi,
    We have two VO's based on same entity ,whenever new row is inserted in one VO and we do executeWithParams on another VO ,It is not returning the rows which are uncommitted.
    I replaced the executeWithParams with a custom method in AM in which I am executing the query after applying the view criteria and setting query mode.Its returning rows which are uncommitted.
    Jdev Version which I am using is 11.1.1.7.0
    ExecuteWithParams used to work with uncommitted data in 11.1.1.6.0 .
    Do we need to raise a bug on framework or Am I missing something?
    Edited by: sharavnkumar_malla on Oct 18, 2012 1:02 AM

    Still the same answer, even after you updated the question. We (the public) don't have 11.1.1.7, know nothing about 11.1.1.7, and cannot help you with 11.1.1.7. Questions about internal builds belong on internal forums.

Maybe you are looking for

  • There are multiple users with the same display name

    Hi, We have a user and when she get an item assigned to her she sees the following alert: "There are multiple users with the same display name USERNAME and at least one of them does not have read permissions to some of the files" Now I looked in the

  • Downloading files from DVD or hard drive cameras

    Is it possible to download files from hard drive or dvd cameras to be used in Premiere without using the capture function which needs to be done in real time?

  • Error occurred in the execution of ODCIINDEXCREATE routine

    Hi, We're using oracle 9.2.0.1. I've created a table with a spatial data column. Then inserted some rows and then created a spatial index on that column. What is peculiar is; depending on the number of the rows inserted I receive below error; ORA-298

  • Free Webinar - Situational Analysis at OnStar with Oracle Spatial Feb. 22

    Join us for the next Directions webinar: Situational Analysis at OnStar with Oracle Spatial When:  Wednesday, February 22nd, 2:00 PM - 3:00 PM EST Register now:  https://www2.gotomeeting.com/register/237138906 OnStar is the largest telematics solutio

  • IPhoto 8 wont import or recognize Library

    Can you help ? My iPhoto Library is stored on an external 500GB iomega and has been spot on until my nephew 'did something' while I was out. Iphoto now has no photos at all and I cant import the iPhoto Library even when open>alt>select library. I hav