ERROR IN PHYSICAL TABLES JOIN

The error I receive while performing a global consistency check is: [38091] Physical table 'D_TIME__EVENT_TIME' joins to non-fact table 'FS_IND_SUBS_RGE_ACT' that is outside of its time dimension table source 'D_TIME__EVENT_TIME'.
I have had this problem for some time and it is getting frustrating. The table D_TIME__EVENT_TIME is an alias of the d_time_event table. I have created a foreign key between D_TIME__EVENT_TIME (the time dimension) and FS_IND_SUBS_RGE_ACT in the physical layer, but every time I check for consistency I get the error.
I have created multiple star schemas using this time dimension table. A couple of other tables have also required physical foreign keys to D_TIME__EVENT_TIME, and those produced no errors.
As a sanity check, I created another alias (d_time_) of the parent table to validate the steps taken. I created a physical foreign key between the d_time_ table and fs_ind_subs_rge_act, and that was successful.
I am at a loss as to what to do next. Any help would be appreciated.

Hi,
Refer to the thread Re: Time Dimension Problem; Joe mentioned some good points there.
thanks,
saichand.v

Similar Messages

  • Error: 38015, Physical tables have multiple joins.

    Hi,
    I have 5 dimensions and 1 fact table. One of the dimension tables has 2 keys, both of which are referenced by the fact table.
    I have created aliases for all tables on which I have defined joins.
    But it is giving me an error like:
    ERRORS:
    GLOBAL:
    [38015] Physical tables "obidb".."ORDER_DETAILS"."FACT" and "obidb".."ORDER_DETAILS"."BILLING_ACCOUNT" have multiple joins.
    Delete new foreign key object if it is a duplicate of existing foreign key.
    Please give me any suggestions.....
    Thanks.

    Hi,
    Did you delete the existing foreign key before creating the alias_dim1 (FK1) and dim1 (FK2) joins to the fact?
    Double-check your model; this usually comes from a circular join, and you can resolve it with the alias method.
    In your model, check all your FK relationships. If you find FKs ending with #1 (duplicates), delete them and re-check metadata consistency. If that does not work, delete the dimension, import it fresh, create an alias of the dimension, and then join each one to the required fact. Also check the link below:
    http://mtalavera.wordpress.com/2012/03/29/obieerpd-fails-global-consistency-on-joins-between-tables/
    Thanks
    Deva

  • Error when Check global consistency: Physical tables have multiple Joins

    Hi
    I have a fact table with multiple joins to one dimension in the physical layer: the dimension is a geographic dimension, and the fact table holds three geography codes: customer geography, account geography and office geography. This is a simple model and is correct for my DWH. However, when I check global consistency, the consistency check manager displays the following error (three times):
    ERRORS:
    GLOBAL:
    [38015] Physical tables "ODS".."ODS"."FT_INTERFAZ_CICLO_FACTURACION" and "ODS".."ODS"."DIM_GEOGRAFIA" have multiple joins. Delete new foreign key object if it is a duplicate of existing foreign key.
    [38015] Physical tables "ODS".."ODS"."FT_INTERFAZ_CICLO_FACTURACION" and "ODS".."ODS"."DIM_GEOGRAFIA" have multiple joins. Delete new foreign key object if it is a duplicate of existing foreign key.
    [38015] Physical tables "ODS".."ODS"."FT_INTERFAZ_CICLO_FACTURACION" and "ODS".."ODS"."DIM_GEOGRAFIA" have multiple joins. Delete new foreign key object if it is a duplicate of existing foreign key.
    How can I resolve this error?
    Thanks
    Edwin

    I have one dimension table named Team.
    The dimension table has two key columns: Team Key and Team Type Key.
    The fact table has four foreign keys:
    a) Sales team key
    b) Sales team type key
    c) Trader team key
    d) Trader team type key
    To handle this, I am going to create alias tables in the physical layer. Can anybody explain the whole process to me?
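
    For what it's worth, a minimal sketch of the role-playing pattern this describes: in raw SQL, the OBIEE aliases correspond to the same TEAM table appearing once per role under different aliases, each joined to the fact on its own foreign key. The measure and name columns here are assumptions for illustration, not the actual schema:
    -- ts and tt play the roles of the TEAM_SALES and TEAM_TRADER aliases
    SELECT f.sales_amount,
           ts.team_name AS sales_team,
           tt.team_name AS trader_team
    FROM   fact_table f
    JOIN   team ts ON f.sales_team_key  = ts.team_key
    JOIN   team tt ON f.trader_team_key = tt.team_key;
    The same pattern repeats for the two team-type keys, giving four aliases in total.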

  • Error 39008: logical table does not join to fact source

    About to lose my mind over this error!
    I'm told logical table IT_WORK_ITEM_D (a dimension) does not join to any fact source, although it should show as joined to IT_WORK_ITEM_DSNPSHT_F
    - I have verified the physical joins
    - I have verified the business model joins
    - I have created hierarchies for all logical tables joined to the fact (IT_WORK_ITEM_DSNPSHT_F)
    - I have checked the Content tab for the fact table and ensured that the logical dimension is set to the lowest level.
    Suggestions from here?
    -John

    This was helpful, although it did not solve the problem immediately.
    After much consistency checking, I cleared out some of the content assignments and that seemed to work (after previously not working). I still don't feel like I identified the core problem, but it is working now.
    -John

  • Could not join physical tables in OBIEE using a data conversion function

    Hi,
    I am trying to join the physical tables wc_perf_ratings_d and w_wrkfc_evt_month_f with the condition
    "Oracle Data Warehouse"."Catalog"."dbo"."Dim_WC_PERF_RATINGS_D"."RATING_CD" = to_char(round( "Oracle Data Warehouse"."Catalog"."dbo"."Fact_W_WRKFC_EVT_MONTH_F"."ORIG_PERF_RATING",2)), but I get the following syntax error when applying it:
    [nQSError:27002] Near <(>: Syntax error [nQSError:26012]
    I used the same condition directly against the database and got results:
    select rating_cd,
           to_char(round(orig_perf_rating, 2)) as orig_perf_rating,
           headcount,
           fte,
           salary_annl,
           total_service_days
    from   wc_perf_ratings_d, w_wrkfc_evt_month_f
    where  wc_perf_ratings_d.rating_cd = to_char(round(w_wrkfc_evt_month_f.orig_perf_rating, 2));
    Here I am trying to convert the format of the ORIG_PERF_RATING field and join it to RATING_CD in the physical layer.
    So my question is: is it possible to join in OBIEE using such conditions? I do not want to convert the field format in the ETL; I want to do it at the OBIEE level. Could anybody please suggest how to do it?

    Hi,
    Thanks for the link. I was trying another method: joining the two fields (rating_cd, orig_perf_rating) at the database level. As I said, both fields are of varchar datatype. Earlier I tried to convert the orig_perf_rating field from number to string; now I want to convert the string data in rating_cd to numbers. The column holds values like 1, 2, 3, 4, bep, ep, vp, nr.
    The requirement is to create a database view over the dimension table (wc_perf_ratings_d) that exposes rating_cd with a numeric datatype instead of string, and then join that field to orig_perf_rating in the fact table, which already holds numbers. Along with the numeric rating values, there are the four string values in rating_cd (bep, ep, vp, nr) I mentioned, so while converting I also have to map those specific strings to particular numbers (bep=1, ep=5, nr=0, vp=3).
    All of this happens at the database level, since I am creating a DB view. So my question is how to apply a conversion in the view that turns the rating_cd values (bep, ep, vp, nr) into the specified numbers. Could you please help?
    for eg: create or replace view as select to_number(rating_cd....................
    Thank you.
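
    A minimal sketch of such a view, assuming the remaining rating codes are plain digit strings; the view name is hypothetical, and the code-to-number mapping comes straight from the post:
    create or replace view wc_perf_ratings_d_v as
    select d.*,
           case d.rating_cd
             when 'bep' then 1
             when 'ep'  then 5
             when 'nr'  then 0
             when 'vp'  then 3
             else to_number(d.rating_cd)   -- '1','2','3','4' convert directly
           end as rating_cd_num
    from   wc_perf_ratings_d d;
    The fact table's orig_perf_rating could then join to rating_cd_num without any conversion on the fact side.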

  • Expand the warehouse tables vs. adding physical tables and joins

    I was questioned about these options:
    a) Add physical tables (from other DBs) and modify the physical model with new joins, plus modify the logical model to include the new columns.
    b) Expand the warehouse tables to include the new columns, with a supplementary ETL to feed the additional columns with data. The logical model would be updated here too, but the idea in this option is to avoid joins in the physical layer.
    My understanding is that option a), despite the joins in the physical layer, would be the better strategy.
    I'd appreciate any comments on the performance side or on the amount of effort to create the supplementary ETL.
    Thanks.
    Antonio

    Hi Lombo,
    I am not sure about what you mean with option a. In my understanding, you are asking for a comparison between:
    a) adding data from an additional source in the RPD:
    This means that you have to create an additional data source in the physical layer. I do not think you can create relations between physical tables sourced from different data sources in your physical layer, which means the data is related in the logical layer. By doing this, the BI Server has to join the data from both sources for each front-end request. It will also have to perform the final aggregations itself instead of shipping them to the database. This is a big performance hit in terms of response time.
    However! It can be much, much faster in terms of the development effort required. If only a limited number of columns and tables is being added, this may be a valid option. You can also use this approach to prototype option b.
    b) adding data from an additional source using ETL:
    Quite some work in terms of development effort. However, all complexity and performance hits are moved to the bottom of the stack and process: database and ETL. You will need to create an additional data source in the DAC, additional custom folder(s) in Informatica and the ETL logic to update the existing tables and/or load new tables.
    Additionally, you need to expand the RPD with the new columns/tables, just as you would in option a. However, now everything is sourced from the same data source: less work for the BI Server to deliver the dataset that satisfies the request.
    So basically, I would consider:
    - How does the system currently perform? Response times in the front end (bad -> go for option b)
    - How many columns will be added, and how often will they be used in the front end?
    - Informatica / DAC / OBI expertise available
    - How long do you have to facilitate reporting on data from the 2nd source?
    Good luck!

  • Using multiple physical tables in a single logical dimension table

    I have two physical tables that are related on a 1-to-1 basis via a natural key. One of these tables is already part of my RPD file (actually, it is W_EMPLOYEE_D from the Oracle BI Applications). The second table contains additional employee attributes from a custom table added to the data warehouse. Unfortunately, I don't seem to be able to display ANY data from this newly added custom table! I'm running OBIEE 11.1.1.6.
    Here's what I've done. Let's call the original table E1 and the new one E2. E1 is part of the repository already, and has functioned perfectly for years.
    - In my physical model, I have imported E2 and defined the join between E1 and E2.
    - In my logical table for E1, I've mapped E2 to E1 (E2 appears as a source), set up an INNER JOIN in the joins section for E1 and added the attributes from E2 in the folder
    - In the SOURCES for this logical table, I've set the logical level of the content for E2 appropriately (detail level, same as E1)
    - In my presentation folder for E1, I've copied in the attributes from E2 that were included in my logical table
    Consistency check runs smoothly, no warnings or errors. Note: E2 contains hundreds of rows, all of which have matching records in E1.
    Now, when I create an analysis that includes only an attribute sourced from E2, I get a single row back with a NULL value. If I create an analysis with one attribute from E1 and one from E2, I get all the valid E1 records with data showing, but NULL for the E2 attributes. Remember, I have an inner join, which means the query is "seeing" E2 data; it is just choosing not to show it to me!
    Additionally, I can grab the query from the NQQuery.log file, and when I run that SQL in SQL*Developer I get PERFECT results: both E1 and E2 attributes show up, so the query engine is generating valid SQL. The log file does not indicate any errors either; it shows the correct number of rows being added to cache. If I create a report that includes attributes from E1, E2 and associated fact metrics, I get similar results: the reports run fine, but all my E2 attributes are NULL in Answers.
    I've verified the basics, like data types, and when I "Query Related Objects" in the repository, everything looks consistent across all 3 layers and all objects. E2 is located in the same (Oracle) database and schema as E1, and there are no security constraints in effect.
    I've experimented with a lot of different things without success, but I expected that the above configuration should have worked. Note that I cannot set up E2 as a new separate dimension, as it does not contain the key value used to join to the facts, nor do the facts contain the natural key that is in both E1 and E2.
    Sorry for the long post - just trying to head off some of the questions you might have.
    Any ideas welcomed! Many thanks!
    Eric
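
    For reference, a sketch of the single physical query this one-LTS mapping should produce; the join and column names are assumptions, since the post only says the tables share a natural key:
    select e1.row_wid,
           e1.full_name,        -- attribute from E1 (W_EMPLOYEE_D)
           e2.custom_attribute  -- attribute from E2 (the custom extension table)
    from   w_employee_d e1
    inner join custom_emp_ext e2
           on e1.integration_id = e2.integration_id;
    If the NQQuery.log SQL already looks like this and returns rows in SQL*Developer, the nulls are being introduced after the physical query runs, which points back at the logical layer (content levels), as the reply below suggests.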

    Hi Eric,
    I would ask you to re-check the content level settings here, as they are the primary cause of this kind of behavior. You may notice that the same information is written in the logical plan of the query, too.
    Also, as per your description
    "In the SOURCES for this logical table, I've set the logical level of the content for E2 appropriately (detail level, same as E1)"
    I would like you to check this point again: if you mapped E2 into the same logical table source as E1 with an inner join, you should set the content level at the E1 levels, not E2 (E2 has effectively become part of the E1 source now). This might be why the BI Server is choosing to eliminate (null out) the values from E2, even though you can see them in the SQL client.
    Hope this helps.
    Thank you,
    Dhar

  • "Select" Physical table as LTS for a Fact table

    Hi,
    I am very new to OBIEE, still in the learning phase.
    Scenario 1:
    I have a "Select" Physical table which is joined (inner join) to a Fact table in the Physical layer. I have other dimensions joined to this fact table.
    In BMM, I created a logical table for the fact table with 2 Logical Table Sources (the fact table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the fact table and the select table, I don't see any data for the select table column.
    Scenario 2:
    In this scenario, I created an inner join between "Select" physical table and a Dimension table instead of the Fact table.
    In BMM, I created a logical table for the dimension table with 2 Logical Table Sources (the dimension table & the select physical table). No errors in the consistency check.
    When I create an analysis with columns from the dimension table and the select table, I see data for all the columns.
    What am I missing here? Why is it not working in the first scenario?
    Any help is greatly appreciated.
    Thanks,
    SP

    Hi,
    If I understand your description correctly, then your materialized view skips some dimensions (infrequent ones). However, when you reference these skipped dimensions in filters, the queries are hitting the materialized view and failing as these values do not exist. In this case, you could resolve it as follows
    1. Create dimensional hierarchies for all dimensions.
    2. In the fact table's logical sources set the content tabs properly. (Yes, I think this is it).
    When you skipped some dimensions, the grain of the new fact source (the materialized view in this case) is changed. For example:
    Say a fact is available with the keys for Product, Customer, Promotion dimensions. The grain for this is Product * Customer * Promotion
    Say another fact is available with the keys for Product, Customer. The grain for this is Product * Customer (In fact, I would say it is Product * Customer * Promotion Total).
    So in the second case, the grain of the table is changed. So setting appropriate content levels for these sources would automatically switch the sources.
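    To make the grain difference concrete, a sketch of the two fact sources (table and column names hypothetical):
    -- Base fact: grain is Product * Customer * Promotion
    select product_key, customer_key, promotion_key, sum(sales_amt) as sales
    from   sales_fact
    group by product_key, customer_key, promotion_key;
    -- Aggregate (e.g. a materialized view): grain is Product * Customer,
    -- i.e. Promotion is rolled up to its grand total
    select product_key, customer_key, sum(sales_amt) as sales
    from   sales_fact
    group by product_key, customer_key;
    Setting the Promotion content level of the second source to the Promotion grand total level tells the BI Server exactly when that source is safe to use.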
    So, I request you to try these settings and let me know if it works.
    Thank you,
    Dhar

  • Problem: 1 physical table -- multiple logical table sources

    Hi,
    I'm quite new to OBIEE and am setting up my repository.
    I have a question: is the following scenario possible?
    Physical Layer: TABLE_A: COL_A, COL_B, COL_C
    TABLE_B: COL_D, COL_E, COL_F
    Join TABLE_A.COL_A = TABLE_B.COL_D
    In the Business Model I have a dimension table with TABLE_A as its data source, with field DIM1 (COL_B).
    The fact table (MEASURE) would have TABLE_B as its data source twice, with different where-clauses on COL_F, and logical table columns (ATT1 and ATT2) both taking their value from COL_E.
    So far I have created everything, and the consistency check shows no errors or warnings, but I get an error in Answers when I create a report showing DIM1, ATT1, ATT2: Incorrectly defined logical table source (for fact table MEASURE) does not contain mapping for [MEASURE.ATT1, MEASURE.ATT2].
    Isn't it possible to have one physical column used in multiple data sources?
    I know it 's working, when I create the physical table twice ... but maybe there's a solution for business model.
    Thanks
    chrissy
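
    For illustration, the result chrissy is after corresponds to a query like this, with one pass over TABLE_B per logical column, filtered on COL_F; the names come from the post, the filter values from the follow-up below, and the outer-join choice is an assumption:
    select a.col_b  as dim1,
           b1.col_e as att1,   -- rows where COL_F = 1
           b2.col_e as att2    -- rows where COL_F = 2
    from   table_a a
    left join table_b b1 on a.col_a = b1.col_d and b1.col_f = 1
    left join table_b b2 on a.col_a = b2.col_d and b2.col_f = 2;
    A logical table source can only map a given physical object once, which is why the replies keep steering toward aliases: one alias per role plays the part of b1, b2, and so on.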

    Hi mengesh,
    that's what I also tried, but it always returns the same error.
    I know it would work if I imported the physical table twice or more, but that's not what I want to do, because in the end I would have 10 or more fields based on this one physical table. There's one field indicating what kind of value the record contains, like this:
    COL_F | COL_E
    1 | customer name
    2 | customer number
    3 | customer branche
    4 | salesman
    5 | date
    6 | report number
    etc.
    I don't think it's useful to import the physical table as many times as I need this field, so I want to split it up in the business model.
    thanks
    chrissy

  • Duplicate Physical Tables

    Hi,
    I am trying to import two physical tables into the repository. Both tables are the same, as I would like to do a self-join between the two tables and a fact table.
    I find that I am only able to view data from the one table I imported. When I copy this table and try to view the data behind the copy, I get the error message 'table or view does not exist'.
    Below are the tables:
    Customer_Dim - Original imported table(can view data)
    Customer_Dim#1 - Copied above table (unable to view data)
    Is this the normal behaviour?
    Can anyone help?
    Thanks

    Hi,
    Create an alias table for Customer_Dim; don't copy and paste. The copy becomes a new physical table object named Customer_Dim#1, and no table of that name exists in the database, whereas an alias still points at the original Customer_Dim.
    Right-click Customer_Dim -> New Object -> Alias -> give the alias table a different name.
    Now you can view the data through the alias table.
    Thanks,
    Balaa...

  • "Factless" fact table join warning

    We have a physical fact table that only contains keys for a given day in order to track changes to Employment history over time. The compensation "facts" are contained in a dimension that is joined to the factless physical table in the logical layer. I use the factless fact as the table source and inner join it to the dimension to create the logical fact source. This seems to be working fine, but is creating a warning when I check consistency: "There are physical table sources mapped in Logical Table Source [...] that are not used in any column mappings or expressions." Is there a better way to model this in the logical layer that this warning is trying to alert me to?
    Thanks!

    Thank you,
    The warning does go away when I add the key from the factless physical table to the logical fact table. It stays gone whether or not I define the physical column as the logical fact table key. Will defining it as a logical key change anything?
    I am curious why it would want me to include a field in the fact table that will never be used in the presentation layer. Is the warning mainly to catch accidental inclusion of physical tables that aren't used?
    Thank you for the response. Nice blog by the way.

  • Using a CASE WHEN statement in the select query of a physical table

    Hello,
    I have a requirement where I have to evaluate a CASE WHEN statement involving a session variable in the select query of a physical table. Let me explain with an example.
    I have a physical table based on a select statement with one column:
    SELECT 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' AS NAME_PARAMETER FROM DUAL. Let me call this the NAME_PARAMETER table.
    I also have a customer table.
    My dashboard has two pages. Page 1 contains a table on the customer table, with column navigation to my second dashboard page.
    On my second dashboard page I created a report based on the NAME_PARAMETER table and a prompt based on the customer table that sets the NAME_PARAMETER request variable.
    EXECUTION
    When I click on a particular customer, the prompt sets the NAME_PARAMETER variable and the NAME_PARAMETER table shows the appropriate customer.
    Everything works as expected. Yay!
    Now I created another table called NAME_PARAMETER1 with a small modification to the earlier query:
    SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)'='Customer 1' THEN 'TEST_MART1' ELSE TEST_MART2' END AS NAME_PARAMETER
    FROM DUAL
    Now I pull this table into the second dashboard page along with the NAME_PARAMETER table report.
    Surprisingly, the NAME_PARAMETER table report executes as before, but the report based on the NAME_PARAMETER1 table fails with the following error:
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 16001] ODBC error state: S1000 code: 1756 message: [Oracle][ODBC][Ora]ORA-01756: quoted string not properly terminated. [nQSError: 16014] SQL statement preparation failed. (HY000)
    SQL Issued: SET VARIABLE NAME_PARAMETER='Novartis';SELECT NAME_PARAMETER.NAME_PARAMETER saw_0 FROM POC_ONE_DOT_TWO ORDER BY saw_0
    If anyone has an explanation for this error and how to achieve the same result, please help.
    Thanks.
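
    Judging from the ORA-01756 message (quoted string not properly terminated), the likely culprit is simply the missing opening quote before TEST_MART2 in the CASE expression. A corrected version would be:
    SELECT CASE WHEN 'VALUEOF(NQ_SESSION.NAME_PARAMETER)' = 'Customer 1'
                THEN 'TEST_MART1'
                ELSE 'TEST_MART2'   -- the opening quote was missing here
           END AS NAME_PARAMETER
    FROM DUAL
    The follow-up below confirms the error turned out to be a simple one.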

    Hello,
    Updates :) sorry, the error was a stupid one. I resolved it, and then got stuck at my next step.
    I am creating a physical table using a select query, but this time I am trying to obtain the schema name dynamically.
    Here is what I am trying to do. the select query of the physical table is as follows.
    SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME FROM 'VALUEOF(NQ_SESSION.SCHEMA_NAME)'.CUSTOMER.
    The idea behind this is to obtain the data from the same table in different schemas, dynamically, based on a session variable. Please let me know if there is a way to achieve this; if not, please let me know whether it can be achieved by any other method in OBIEE.
    Thanks.
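
    One hedged suggestion: if the VALUEOF substitution is purely textual (as the quoted-literal examples earlier in this thread suggest), the single quotes are the problem here, because they turn the substituted schema name into a string literal rather than an identifier. The unquoted form may be worth trying, though this is an assumption about the substitution behavior, not a confirmed fix:
    SELECT CUSTOMER_ID AS CUSTOMER_ID, CUSTOMER_NAME AS CUSTOMER_NAME
    FROM VALUEOF(NQ_SESSION.SCHEMA_NAME).CUSTOMER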

  • OBIEE generated SQL differs if it's a Physical Table or Select Table...

    Hi!
    I have some tables defined in the Physical Layer, which some are Physical Tables and others are OBIEE "views" (tables created with a Select clause).
    My problem is that the generated SQL for the same table differs (as expected) depending on whether it is a Physical Table or a "Select Table", and this difference causes problems in the returned data. When it is a Physical Table, the final report returns the correct data, but when it is a Select Table it returns incorrect/incomplete data. The report joins this table with a table from a different database (it is a join between Sybase IQ and SQL Server).
    This is the generated SQL in the log:
    -- Physical Table generated SQL
    select T182880."sbl_cust_acct_row_id" as c1,
    T182880."sbl_cust_acct_ext_key" as c2,
    T182880."sbl_cust_source_sys" as c3
    from
    "SGC_X_KEY_ACCOUNT" T182880
    order by c2, c3
    -- "Select Table" generated SQL
    select
         sbl_cust_acct_ext_key,
         ltrim(rtrim(sbl_cust_source_sys)) as sbl_cust_source_sys,
         sbl_cust_acct_row_id,
         sbl_cust_acct_camp_contact_row_id,
         ods_date,
         ods_batch_no,
         ods_timestamp
    from dbo.SGC_X_KEY_ACCOUNT
    As you may notice, the main differences are the use of aliases (which I think has no influence on the report result) and the use of ORDER BY (which I am starting to think is the main reason the correct data comes back).
    Don't forget that the OBIEE server is joining the data from this table with data from a table in a different database, so the join is made in memory (in the OBIEE engine). Maybe in the OBIEE engine the ORDER BY is essential to guarantee a correct join... but then again, I have other tables in the physical layer that are defined as "Select" whose generated SQL does use the aliases and the ORDER BY clause.
    To work around the problem, I converted the "Select Table" into a "Physical Table". The reason it was defined as a "Select Table" was a restriction in the WHERE clause (which I have already eliminated, although performance will be worse).
    I'm confused. Help!
    Thanks.
    FPG

    Hi FPG,
    Not sure if this is a potential issue for you at all, but I know it caused me all kinds of headaches before I figured it out. It had to do with the values on the "Features" tab in the database object's settings in the Physical Layer:
    Different SQL generated for physical table query vs. view object query?
    Mine had to do with SQL from view objects not being submitted as I would expect; it sounds like yours has more to do with ORDER BY. I believe I remember seeing some Order By and Group By settings in the "Features" list. You might make a copy of your RPD, experiment with some of those settings if they aren't already selected, and retest your queries with the new DB settings.
    Jeremy

  • Creating Physical table as select in the .rpd

    When creating a physical table as a select, we can write the SQL statement like SELECT A,B,C... FROM xxx, etc.
    The trouble is that we then have to go to the COLUMNS tab and define each column... This is very error-prone and a lot of work.
    Please, is there a way to somehow leverage the columns already defined in the physical table?
    Thanks for any help.
    Antonio

    Hi Antonio,
    You didn't quite catch my meaning:
    1) Open the actual RPD and create the view using the select query; do not add any columns.
    2) Create the view in the dev DB.
    3) Create a new RPD file and import the view into the physical layer.
    4) Select the columns from step 3 and copy them.
    5) Paste them into the actual RPD onto the 'select query' object.
    You are good to go.
    This would give you all the columns and their datatypes.
    Hope this helps.
    Cheers,
    SVee
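
    For step 2, the database view simply mirrors the select statement used in the RPD object, so the import in step 3 picks up the same column names and datatypes; the view name here is hypothetical, the columns come from the original post:
    create or replace view rpd_select_helper as
    select a, b, c from xxx;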

  • Oracle 11gr2 ODBC - error updating linked table (Ora 01722 and 01461)

    Good day folks,
    My shop has just moved to the 11gR2 client and server. We were previously using 11gR1 with no issues (and before that 10, 9, 8, etc.). After moving from 11gR1 to 11gR2, we began getting errors from some of our MS Access ODBC applications with linked Oracle tables. The error occurs when executing an UPDATE statement that has a table join in it. Here is a simple example:
    UPDATE TableX SET TableX.Fieldx = "valuex" WHERE TableX.Fieldx = TableZZZ.Fieldx AND TableZZZ.fieldzzz is not null
    Currently, with the 11gR2 client, an update query like the one above will error out in one of the following ways:
    - odbc -- update on a linked table failed - Ora 01722 invalid number
    - ORA-01461: can bind a LONG value only for insert into a LONG column
    - Or it will say that the records were not updated because they are locked.
    In some cases, I have noticed records being updated that were not supposed to be updated: records that the WHERE clause was meant to exclude. That is very unsettling.
    I understand that perhaps an UPDATE statement shouldn't be joining tables, and perhaps it should be done over a couple of calls, but the reality is that this code is out there in abundance. If there is a solution that doesn't amount to changing all this code or reverting to 11gR1, I would love to find it.
    Since the query runs fine in SQL*Plus, and also runs fine against a local Access table rather than a linked Oracle table, I figured the issue was possibly with the Oracle 11gR2 ODBC driver. So I swapped the Oracle ODBC driver (replacing sqora32.dll version 11.2.0.1 with version 11.1.0.7), and the problem went away.
    I believe this verifies that the issue resides with Oracle ODBC version 11.2.0.1. Can anyone help? I'm assuming it's not particularly wise to simply swap sqora32.dll files on all my clients' machines, so I am searching for an actual solution instead.
    I also performed ODBC tracing to see what Access hands to the Oracle ODBC driver, and then used database/SQL*Net tracing to see what the ODBC driver hands off to SQL*Net and the database.
    The results are in the following post:
    Thanks guys!!
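
    As an aside, a join-free rewrite of the example UPDATE (standard SQL, using the names from the post) sidesteps the driver's handling of joined updates entirely, at the cost of touching the code:
    UPDATE TableX
    SET    Fieldx = 'valuex'
    WHERE  Fieldx IN (SELECT Fieldx
                      FROM   TableZZZ
                      WHERE  fieldzzz IS NOT NULL);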

    SQLNET TRACE
    If you want an Admin level trace, I can have one right away.
    (856) [13-JUN-2010 22:11:00:657] nsopen: opening transport...
    (856) [13-JUN-2010 22:11:00:657] nttcni: Tcp conn timeout = 60000 (ms)
    (856) [13-JUN-2010 22:11:00:657] nttcni: trying to connect to socket 1364.
    (856) [13-JUN-2010 22:11:00:688] nttcni: connected on ipaddr 142.139.221.62
    (856) [13-JUN-2010 22:11:00:688] nttcon: set TCP_NODELAY on 1364
    (856) [13-JUN-2010 22:11:00:688] nsopen: transport is open
    (856) [13-JUN-2010 22:11:00:688] nsnainit: inf->nsinfflg[0]: 0x61 inf->nsinfflg[1]: 0x61
    (856) [13-JUN-2010 22:11:00:688] nsopen: global context check-in (to slot 0) complete
    (856) [13-JUN-2010 22:11:00:688] nscon: doing connect handshake...
    (856) [13-JUN-2010 22:11:00:688] nscon: sending NSPTCN packet
    (856) [13-JUN-2010 22:11:00:688] nscon: sending 233 bytes connect data
    (856) [13-JUN-2010 22:11:00:688] nsdo: 233 bytes to NS buffer
    (856) [13-JUN-2010 22:11:00:719] nscon: got NSPTRS packet
    (856) [13-JUN-2010 22:11:00:719] nscon: sending NSPTCN packet
    (856) [13-JUN-2010 22:11:00:719] nscon: sending 233 bytes connect data
    (856) [13-JUN-2010 22:11:00:719] nsdo: 233 bytes to NS buffer
    (856) [13-JUN-2010 22:11:00:735] nscon: got NSPTAC packet
    (856) [13-JUN-2010 22:11:00:735] nscon: connect handshake is complete
    (856) [13-JUN-2010 22:11:00:735] nscon: nsctxinf[0]=0x61, [1]=0x21
    (856) [13-JUN-2010 22:11:00:735] nsnainconn: inf->nsinfflg[0]: 0x61 inf->nsinfflg[1]: 0x21
    (856) [13-JUN-2010 22:11:00:735] nsnasend: bytes to send: 158
    (856) [13-JUN-2010 22:11:00:735] nsdo: 158 bytes to NS buffer
    (856) [13-JUN-2010 22:11:00:735] nsnareceive: buffer address: 0x132c34 bytes wanted: 2048
    (856) [13-JUN-2010 22:11:00:735] nsnareceive: calling NS to receive 2048 bytes into address 0x132c34
    (856) [13-JUN-2010 22:11:00:766] nsdo: 153 bytes from NS buffer
    (856) [13-JUN-2010 22:11:00:766] nsnareceive: received 153 bytes
    (856) [13-JUN-2010 22:11:00:766] nsnareceive: no more data to receive - returning
    (856) [13-JUN-2010 22:11:00:766] nsnareceive: total bytes received: 153
    (856) [13-JUN-2010 22:11:01:063] nsnasend: bytes to send: 77
    (856) [13-JUN-2010 22:11:01:063] nsdo: 77 bytes to NS buffer
    (856) [13-JUN-2010 22:11:01:063] nsnareceive: buffer address: 0x132c34 bytes wanted: 2048
    (856) [13-JUN-2010 22:11:01:063] nsnareceive: calling NS to receive 2048 bytes into address 0x132c34
    (856) [13-JUN-2010 22:11:01:079] nsdo: 64 bytes from NS buffer
    (856) [13-JUN-2010 22:11:01:079] nsnareceive: received 64 bytes
    (856) [13-JUN-2010 22:11:01:079] nsnareceive: no more data to receive - returning
    (856) [13-JUN-2010 22:11:01:079] nsnareceive: total bytes received: 64
    (856) [13-JUN-2010 22:11:01:079] naun5authent: Authentication type is 0
    (856) [13-JUN-2010 22:11:01:079] nsnasend: bytes to send: 1862
    (856) [13-JUN-2010 22:11:01:079] nsdo: 1862 bytes to NS buffer
    (856) [13-JUN-2010 22:11:01:079] nsnareceive: buffer address: 0x132c34 bytes wanted: 2048
    (856) [13-JUN-2010 22:11:01:079] nsnareceive: calling NS to receive 2048 bytes into address 0x132c34
    (856) [13-JUN-2010 22:11:01:141] nsdo: 165 bytes from NS buffer
    (856) [13-JUN-2010 22:11:01:141] nsnareceive: received 165 bytes
    (856) [13-JUN-2010 22:11:01:141] nsnareceive: no more data to receive - returning
    (856) [13-JUN-2010 22:11:01:141] nsnareceive: total bytes received: 165
    (856) [13-JUN-2010 22:11:01:141] nsnasend: bytes to send: 33
    (856) [13-JUN-2010 22:11:01:141] nsdo: 33 bytes to NS buffer
    These lines are present using both versions of sqora32.dll:
    (856) [13-JUN-2010 22:11:01:141] nszgwop: SQLNET.WALLET_OVERRIDE not found, using default.
    (856) [13-JUN-2010 22:11:01:157] nscontrol: Vect I/O support: 0
    (856) [13-JUN-2010 22:11:01:391] nioqrc: Recieve: returning error: 3111
    (856) [13-JUN-2010 22:11:01:391] nsdo: sending NSPTMK packet
    (856) [13-JUN-2010 22:11:01:391] nserror: nsres: id=0, op=77, ns=12630, ns2=0; nt[0]=0, nt[1]=0, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    These lines only happen when using the R2 version of sqora32.dll
    (856) [13-JUN-2010 22:11:01:719] nioqrc: Recieve: returning error: 3111
    (856) [13-JUN-2010 22:11:01:719] nsdo: sending NSPTMK packet
    (856) [13-JUN-2010 22:11:01:860] nserror: nsres: id=0, op=0, ns=12630, ns2=0; nt[0]=0, nt[1]=0, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
    (856) [13-JUN-2010 22:21:03:782] nstimarmed: no timer allocated
