Updating a row while reading a ResultSet

Hi,
What I would like to do is very simple:
while (rs.next()) {
    System.out.println(rs.getString(1));
    rs.updateBoolean(3, true);
    rs.updateRow();
}
This seems OK to me ;-) but when I run this code snippet, the row I've read before (getString(1)) comes back empty everywhere. I would like to read non-destructively... what am I doing wrong?

Here, rs is the ResultSet.
Options I've set:
transaction isolation = TRANSACTION_READ_COMMITTED
statement arguments: TYPE_SCROLL_SENSITIVE, CONCUR_UPDATABLE
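
For reference, here is a minimal, self-contained sketch of the pattern described above; the connection URL, table name, and column names are made up for illustration. Note that many drivers only return a truly updatable ResultSet when the query selects explicit columns (including the primary key) from a single table; otherwise they may silently downgrade the ResultSet to read-only, which can lead to surprising behaviour like the one described.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UpdatableResultSetSketch {
    public static void main(String[] args) throws SQLException {
        // Hypothetical connection details; replace with your own.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:yourdb://host/db", "user", "password")) {
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            // Selecting explicit columns (including the key) from a single table
            // gives the driver the best chance of producing an updatable result.
            try (Statement stmt = conn.createStatement(
                         ResultSet.TYPE_SCROLL_SENSITIVE,
                         ResultSet.CONCUR_UPDATABLE);
                 ResultSet rs = stmt.executeQuery(
                         "SELECT name, id, processed FROM my_table")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1)); // read column 1 (name)
                    rs.updateBoolean(3, true);           // change column 3 (processed) in memory
                    rs.updateRow();                      // write the change back to the database
                }
            }
        }
    }
}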

Similar Messages

  • There was an error while updating row count for "SH".."SH".CHANNELS

Hi All,
I am new to OBIEE. Please help in this regard.
I am building the physical layer as per the Repository guide. When I import the data into it, I get the error below:
*"There was an error while updating row count for "SH".."SH".CHANNELS" :"*
CHANNELS is the table name and it has 5 rows,
but I am able to see the data at the SQL prompt: SELECT * FROM SH.CHANNELS displays the 5 rows of data.
Please help in this regard: where exactly is the problem?
Thanks,

What is the error?
Make sure that your connection pool settings are okay.
Make sure that you are using the correct driver if you are using an ODBC DSN.
Also make sure that your Oracle server is running (TNS listener and Oracle server services).

  • Obiee oracle gateway error while updating row count

Hi,
OBIEE server 11.1.1.5, Oracle server 11g installed on 64-bit Linux.
While updating row counts in the Admin tool I am getting the following error:
[NQODBC][SQL_STATE: HY000][nQSError: 10058] A general error has occurred.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43093] An error occurred while processing the EXECUTE PHYSICAL statement.
[nQSError: 17003] Oracle gateway error: OCIEnvNlsCreate or OCIEnvInit failed to initialize environment. Please check your Oracle Client installation and make sure the correct version of the OCI libraries is in the library path.
I am able to access the database from SQL*Plus; it works fine.
Any suggestion is highly appreciated.

Make sure your connection pool is valid and able to import tables or execute reports.
If everything above looks good, go to the Physical layer database properties -> General tab, choose the database version, and try again.
If not,
check Doc ID 1271486.1,
or,
to resolve the issue, create a softlink (ln -s) in the <OracleBI>/server/Bin folder to link to the 32-bit Oracle Client driver file.
The example below shows how to create the softlink from the 64-bit directory:
cd /u10/app/orcladmin/oracle/OracleBI/server/Bin
ln -s $ORACLE_HOME/lib32/libclntsh.so.10.1 libclntsh.so.10.1
If this helps, please mark the post.

  • Timeout error while reading and updating in batches in a single transaction

Problem:
In a single transaction I read from and update the database in batches. The first batch runs fine; the second batch hangs at
sqlCommand.ExecuteReader(). The following is not my complete code, but the relevant bits that give more information about the problem. At the end of this post please find the error log. Please help me.
My guess:
The locks acquired while reading and updating the first batch still remain when the second batch starts, which blocks the next read. But I could not find a way to solve it.
    Get connection and open it.
    Begin Transaction.
    sqlUpdateTransaction = sqlUpdateConnection.BeginTransaction(String.Format("UpdateUsageDetailTransaction{0}", storageClassId))
    Get application lock.
    GetApplock
const String sqlText = @"DECLARE @result int EXEC @result = sp_getapplock @Resource=@resourceName,@LockMode='Exclusive',@LockOwner='Transaction',@LockTimeout=@timeout select @result";
    using (SqlCommand sqlCommand = sqlTransaction.Connection.CreateCommand())
    sqlCommand.CommandText = sqlText;
    sqlCommand.Parameters.AddWithValue("@resourceName", resourceName);
    sqlCommand.Parameters.AddWithValue("@timeout", milliSecondsTimeout);
    sqlCommand.CommandTimeout = secondsTimeout;
    sqlCommand.Transaction = sqlTransaction;
    Int32 lockResult = (Int32) sqlCommand.ExecuteScalar();
    Seek and read the range of records.
    using (var sqlReadConnection = new SqlConnection(_connectionString))
    sqlReadConnection.Open();
    SqlTransaction sqlTransaction = _sqlUpdateTransaction;
    _cdrList = CdrOps.FetchByrecordsIdRange(_yearMonth, firstSkid, firstSkid + count - 1, sqlReadConnection);
    sqlReadConnection.Close();
    return _cdrList.Count > 0;
    static public Dictionary FetchByrecordsIdRange(Int32 yearMonth, Int64 startCdrId, Int64 endCdrId, SqlConnection sqlConnection)
    Dictionary cdrList = new Dictionary();
    using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
    sqlCommand.CommandText = "EXEC P_GetCDRData @yearMonth, @startCdrId, @endCdrID";
    sqlCommand.Parameters.AddWithValue("@yearMonth", yearMonth);
    sqlCommand.Parameters.AddWithValue("@startCdrId", startCdrId);
    sqlCommand.Parameters.AddWithValue("@endCdrID", endCdrId);
    sqlCommand.CommandTimeout = DbOps.TwoHourTimeoutValue;
    using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
    FetchrecordPieces(sqlDataReader, cdrList);
    return cdrList;
Update the records in the list using a loop. Then check whether the number of records read equals the batch size; if so, write and flush.
    update()
    _tollUpdatedList.Add((Toll) record);
    _legacyUpdateCount++;
    Dispose.
Dispose()
if (_sqlUpdateTransaction != null && _sqlUpdateTransaction.Connection != null)
_sqlUpdateTransaction.Rollback(String.Format("UpdateUsageDetailTransaction{0}", _storageClassId));
    _sqlUpdateTransaction.Dispose();
    _sqlUpdateTransaction = null;
    Commit.
    commit()
    if(_sqlUpdateTransaction != null)
    _sqlUpdateTransaction.Commit();
    _sqlUpdateTransaction.Dispose();
    _sqlUpdateTransaction = null;
    Error log.
    Error: [0x80004005] MonthlyFileDb::Seek - Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    Thank you, I used beta_lockinfo and observed the following
    spid=59/0/2
    command = SELECT  
    appl=.Net SqlClient Data Provider
    hostprc= 3640
    dbname = DEV_ECAP_P_CAP_ENT_CMN
    prcstatus= SUSPENDED
    spid_ =
    59/0/2
    blklvl = 1
    blkby = 55
    rsctype = 
    locktype =
    lstatus =
    ownertype =
    rscsubtype =
    waittime = 785.139
    waittype = LCK_M_IS
    spid__ = 59/0/2
    nstlvl = 3
    inputbuffer = (@yearMonth int,@startCdrId bigint,@endCdrID bigint)EXEC P_GetCDRData @yearMonth, @startCdrId, @endCdrID
    current_sp = DEV_ECAP_P_CAP_ENT_CMN.dbo.P_GetCDRData
    spid=55
    command = NULL
    appl=.Net SqlClient Data Provider
    hostprc= 3640
    dbname = DEV_ECAP_P_CAP_ENT_CMN
    prcstatus= sleeping
    spid_ =  55
    blklvl = !!
    blkby = 
    rsctype = APPLICATION
    locktype = X
    lstatus = grant
    ownertype = transaction
    rscsubtype = 
    waittime = 
    waittype = 
    spid__ = 55
    nstlvl = 
    inputbuffer =UPDATE UsgDetailCommon SET RunId = t2.RunId FROM UsgDetailCommon t1 INNER JOIN #UsgDetailCommon_Update t2 ON t1.YearMonth = t2.YearMonth AND t1.CdrId = t2.CdrId ;DROP TABLE #UsgDetailCommon_Update
    current_sp = 
But what is the solution for this? I have been googling and found a similar post, but with no solution; it is not letting me post the HTML link here.

  • Update rows in Database reading field in rows of selectoneListBox

JDev 11.1.1.4, JSF application.
I have a .jspx that contains an af:table with 20 rows. These rows are read from the database.
I have (in the same .jspx) an af:selectOneListbox with f:selectItems.
I use drag and drop from the table to the selectOneListbox, and I save the first field of each chosen row into the af:selectItems.
Now, how can I add a button so that, when I press it, it makes a change in the database (View Objects), passing as a parameter the fields read from the rows of the selectOneListbox?
Thanks

Thanks for your reply Kamaal,
but I have thought of another way.
I'd like to write something like:
update filea set field1='1' where field2 in (getSelectedItems())
where getSelectedItems() contains the fields read from the rows of the selectOneListbox.
But I do not know where I have to write this SQL statement, nor how to execute it by pressing a button. Maybe in the ViewObjectImpl? And if yes, how?
Thanks
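
Setting the ADF specifics aside, here is a minimal plain-JDBC sketch of how such an IN-list update could be executed with bound parameters rather than string concatenation. The table and column names come from the statement above; the method name and the way the selected values are obtained are hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class InListUpdateSketch {
    // Builds "UPDATE filea SET field1 = '1' WHERE field2 IN (?, ?, ...)" with one
    // placeholder per selected value, binds the values, and runs the update.
    public static int updateSelected(Connection conn, List<String> selectedItems)
            throws SQLException {
        if (selectedItems.isEmpty()) {
            return 0; // nothing selected, nothing to update
        }
        StringBuilder sql = new StringBuilder(
                "UPDATE filea SET field1 = '1' WHERE field2 IN (");
        for (int i = 0; i < selectedItems.size(); i++) {
            sql.append(i == 0 ? "?" : ", ?");
        }
        sql.append(")");
        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            for (int i = 0; i < selectedItems.size(); i++) {
                ps.setString(i + 1, selectedItems.get(i));
            }
            return ps.executeUpdate(); // number of rows changed
        }
    }
}

In an ADF application this could be wired to the button's action listener, which collects the selected list-box values and passes them in; the JDBC part above stays the same.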

  • Updated iPad2(2011) to iOS8 now iBooks crashes while reading a book. What is going on?

    Updated iPad2(2011) to iOS8. Now iBooks crashes while reading a book and closes down. I have sync'd the iPad a few times but nothing changes and it will 'crash' every 5-10 minutes. What is going on?

A better solution. This worked for me and several others, from a post by MrsMike921 in the following thread:
https://discussions.apple.com/message/26725105#26725105
Go to Settings, and first go to Safari and delete your history by scrolling down to "Clear History and Website Data". Then go to the advanced settings and select "Website Data". It should show 0 bytes. If not, clear again.
Next go to iTunes & App Store in Settings, touch your Apple ID, then select Sign Out.
After you sign out of your ID, restart your iPad by holding down the power button and home key simultaneously until you see the Apple logo after the restart.
When you are powered up, go to iBooks. You should get a message that you need to sign in to sync your books. Sign in, and that should do it.

  • Updatable/scrollable java.sql.ResultSet + BC4J + Jdeveloper

Hi,
Can anyone help me out with how to get an updatable/scrollable ResultSet in BC4J?
I want to count the total number of rows in my application, so I have written the code below:
dbTrans = this.getDBTransaction();
PreparedStatement prpmd = dbTrans.createPreparedStatement(query, DBTransaction.DEFAULT);
ResultSet rst = prpmd.executeQuery();
if (rst.next())
    if (rst.last())
        totalRecords = rst.getRow();
but the code above throws the exception "Invalid operation for forward only resultset : last".

        String sql = "SELECT COUNT(*) FROM table";
        PreparedStatement prest = con.prepareStatement(sql);
        ResultSet rs = prest.executeQuery();
        while (rs.next()) {
            records = rs.getInt(1);
        }
Not sure whether this will be feasible in your case.
Otherwise:
Statement stmt = conn.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY); // creates a scrollable ResultSet
rs = stmt.executeQuery(sqlString);
// Point to the last row in the ResultSet.
rs.last();
// Get the row position, which is also the number of rows in the ResultSet.
int rowcount = rs.getRow();
// Reposition to before the first row.
rs.beforeFirst();
Note that in the case of the 2nd style, if the ResultSet holds a huge amount of data, traversing to the last row will be time-consuming.
You can also consider using CachedRowSet (http://onjava.com/pub/a/onjava/2004/06/23/cachedrowset.html), based on your requirement.
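
Since the reply mentions CachedRowSet, here is a minimal sketch of that third option (the JDBC URL, credentials, and table name are placeholders). Because a CachedRowSet reads the whole result into memory when execute() is called, it can report its row count directly via size(), at the cost of holding every row in memory.

import javax.sql.rowset.CachedRowSet;
import javax.sql.rowset.RowSetProvider;

public class CachedRowSetCountSketch {
    public static void main(String[] args) throws Exception {
        CachedRowSet crs = RowSetProvider.newFactory().createCachedRowSet();
        crs.setUrl("jdbc:yourdb://host/db"); // hypothetical connection details
        crs.setUsername("user");
        crs.setPassword("password");
        crs.setCommand("SELECT * FROM some_table");
        crs.execute();                              // runs the query and caches all rows
        System.out.println("Rows: " + crs.size()); // size() works because the rows are cached
    }
}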

  • Cursor and Update rows based on value/date

    SQL Server 2012
    Microsoft SQL Server Management Studio
    11.0.3128.0
    Microsoft Analysis Services Client Tools
    11.0.3128.0
    Microsoft Data Access Components (MDAC)
    6.1.7601.17514
    Microsoft MSXML 3.0 4.0 5.0 6.0 
    Microsoft Internet Explorer
    9.11.9600.16518
    Microsoft .NET Framework
    4.0.30319.18408
    Operating System
    6.1.7601
The objective of this is to test the cursor and then use it in a production environment once it is fixed. What I would like to do is update rows in a column I duplicated from the AdventureWorks2012 HumanResources.Employee table, originally called 'HiredDate'. I
made a duplicate column called 'DateToChange' and would like to change it based on a date I have picked, which normally returns 2 results (i.e. the date '04/07/2003'). The code runs but will not change both dates. It did run, however, with an error, but changed
only 1 of the 2 rows, because it said ['nothing available in next fetch'].
The code to add the column, and the query that returns the results I am running this against:
-- ADD column 'DateToChange'
ALTER TABLE [HumanResources].[Employee] ADD DateToChange Date NULL;
    -- Copy 'HireDate' data to 'DateToChange'
    UPDATE HumanResources.Employee SET DateToChange = HireDate;
    -- Change 'DateToChange' to NOT NULL
    ALTER TABLE [HumanResources].[Employee] ALTER COLUMN DateToChange Date NOT NULL;
    SELECT BusinessEntityID,HireDate, CONVERT( char(10),[DateToChange],101) AS [Formatted Hire Date]
    FROM HumanResources.Employee
    WHERE [DateToChange] = '04/07/2003';
    Code:
    USE AdventureWorks2012;
    GO
    -- Holds output of the CURSOR
    DECLARE @EmployeeID INT
    DECLARE @HiredDate DATETIME
    DECLARE @HiredModified DATETIME
    DECLARE @ChangeDateTo DATETIME
    --Declare cursor
-- SCROLL CURSOR allows "extra options" to pull records: i.e. PRIOR, ABSOLUTE n, RELATIVE n
DECLARE TestCursor CURSOR SCROLL FOR
-- SELECT statement defining which records will be used by the CURSOR
    -- Assign the query to the cursor.
    SELECT /*HumanResources.Employee.BusinessEntityID, HumanResources.Employee.HireDate,*/ CONVERT( char(10),[DateToChange],101) AS [Formatted Hire Date]
    FROM HumanResources.Employee
    WHERE DateToChange = '01/01/1901'
    /*ORDER BY HireDate DESC*/ FOR UPDATE OF [DateToChange];
    -- Initiate CURSOR and load records
    OPEN TestCursor
    -- Get first row from query
    FETCH NEXT FROM TestCursor
    INTO @HiredModified
-- While @@FETCH_STATUS = 0, the cursor has successfully fetched the next record.
    WHILE (@@FETCH_STATUS = 0 AND @@CURSOR_ROWS = -1)
    BEGIN
    FETCH NEXT FROM TestCursor
    IF (@HiredModified = '04/07/2003')/*05/18/2006*/
    -- Sets @HiredModifiedDate data to use for the change
    SELECT @ChangeDateTo = '01/01/1901'
    UPDATE HumanResources.Employee
    SET [DateToChange] = @ChangeDateTo --'01/01/1901'
    FROM HumanResources.Employee
    WHERE CURRENT OF TestCursor;
    END
    -- CLOSE CURSOR
    CLOSE TestCursor;
    -- Remove any references held by cursor
    DEALLOCATE TestCursor;
    GO
This query runs successfully but it does not produce the desired result of changing the dates
04/07/2003 to 01/01/1901.
I would like the query to essentially run the initial select statement, and then update and iterate through the returned results while replacing the necessary column in each row.
I am also open to changes or a different design altogether.
For this query I need:
1. To narrow the initial set of information.
2. To check whether the information returned, in particular a date, is before [i.e. the current month minus 12 months, or
12 months before the current month].
3. Next, to replace the dates with the needed date
[haven't written this out yet but it will need to be done].
4. After all this is done, to update a column on each row
if the 'date' is within 12 months of the date checked.
NOTE: I am new to T-SQL and have only been doing this for a few days, but I will understand or read up on what is explained if given enough information. Thank you in advance to anyone who may be able to help.

The first thing you need to do is forget about cursors. Those are rarely needed. Instead you need to learn the basics of the T-SQL language and how to work with data in sets. For starters, your looping logic is incorrect. You open
the cursor and immediately fetch the first row. You enter the loop, and the first thing in the loop does what? It fetches another row. That means you have "lost" the values from the first row fetched. You also do not test the success of
that fetch but immediately try to use the fetched value. In addition, your cursor includes the condition "DateToChange = '01/01/1901'", so by extension you only select rows where that date is Jan 1 1901. So the value fetched into @HiredModified will
never be anything different; it will always be Jan 1 1901. The IF logic inside your loop will always evaluate to FALSE.
But forget all that. In words, tell us what you are trying to do. It seems that you intend to add a new column to a table, one that is (ultimately) not null and is set to a particular value based on some criteria. Since you intend the
column to be not null, it is simpler to just add the column as not null with a default. Because you are adding the column, the assumption is that you need to set the appropriate value for EVERY row in the table, so the actual default value can be anything.
Given the bogosity of the 1/1/1901 value, why not use it as your default and then set the column based on HireDate afterwards? Simply follow the ALTER TABLE statement with an UPDATE statement. I don't really understand what your logic
or goal is, but perhaps that will come with a better description. In short:
alter table xxx add DateToChange date default '19010101'
update xxx set DateToChange = HireDate where [some unclear condition]
Lastly, you should consider wrapping everything you do in a transaction so that you can recover from any errors. In a production system, you should consider making a backup immediately before you do anything; strongly consider it, and have a good reason not
to do so if that is your choice (and have a recovery plan just in case).

  • Counting Rows from a ResultSet

    Hi!
If I retrieve some information from a database, say by using:
ResultSet RS = Stmt.executeQuery("select * from events");
How can I get a value for the number of rows in this ResultSet?
    Thanks!
    Alastair

Almost everyone, when they first use ResultSet, thinks of it as a container with the data rows already in there, and wonders why there's no size method.
But that's not how they work. A ResultSet, generally, holds only one row at a time. next() requests the next row from the database, and before next() is called the ResultSet doesn't know whether another row is going to be available. Indeed, the number of rows that qualify for return in the result set may actually change while you are processing it.
Doing a SELECT COUNT(*) asks the database for the number of rows that match any criteria in the SELECT, but that number may have changed by the time you actually retrieve the rows. Someone else may have added or removed rows.
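
A minimal sketch of the SELECT COUNT(*) approach mentioned above, using the events table from the question (the connection details are placeholders). Keep in mind the caveat: the count reflects the table at the moment the query runs and may be stale by the time you fetch the rows.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CountRowsSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:yourdb://host/db", "user", "password"); // hypothetical
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM events")) {
            int rowCount = 0;
            if (rs.next()) {
                rowCount = rs.getInt(1); // the count at the time the query ran
            }
            System.out.println("events currently has " + rowCount + " rows");
        }
    }
}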

  • Number of rows in a ResultSet

    DB: Release 8.1.5.0.0
    Server: DEC Alpha OSF1 V4.0F
    Driver: thin, version?
    We're using JDBC to access Oracle tables from classes loaded into the database with loadjava. I can execute a statement and get a ResultSet, but I haven't been able to find a way to get the number of rows in the ResultSet (since Oracle's implementation of JDBC doesn't include the getFetchSize method of ResultSet).
    Is there another way to get the size of a ResultSet? If not, will Oracle be adding the getFetchSize method to their implementation soon?
    Thanks.

> Try ResultSet rs = statement.executeQuery(...);
> if (rs == null) { // result set is empty }
That is incorrect... it should have read:
ResultSet rs = statement.executeQuery(...);
if (!rs.next()) {
    // result set is empty
}
You're right!! It was copy/pasted from the executeQuery() API documentation about statements:
"a ResultSet object that contains the data produced by the given query; never null"
Here's the catch, though: that call to next() will advance your result set, which will throw off your cursor in a while loop (used to parse the results). It requires you to reset the cursor before processing the result set.
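
A minimal sketch of the "reset the cursor" point above, assuming a scrollable ResultSet and a hypothetical table name; a forward-only ResultSet cannot be repositioned this way.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class EmptyCheckThenRewind {
    static void process(Connection conn) throws SQLException {
        // A scrollable ResultSet is required, otherwise beforeFirst() throws SQLException.
        try (Statement stmt = conn.createStatement(
                     ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY);
             ResultSet rs = stmt.executeQuery("SELECT * FROM some_table")) {
            if (!rs.next()) {
                System.out.println("result set is empty");
                return;
            }
            rs.beforeFirst(); // rewind so the loop below sees every row, including the first
            while (rs.next()) {
                // ... process each row ...
            }
        }
    }
}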

  • Determining Number of Rows in a ResultSet

    Hi,
    Is there an easy way to determine the number of rows in a result set with TYPE_FORWARD_ONLY?

> Try ResultSet rs = statement.executeQuery(...);
> if (rs == null) { // result set is empty }
That is incorrect... it should have read:
ResultSet rs = statement.executeQuery(...);
if (!rs.next()) {
    // result set is empty
}
You're right!! It was copy/pasted from the executeQuery() API documentation about statements:
"a ResultSet object that contains the data produced by the given query; never null"
Here's the catch, though: that call to next() will advance your result set, which will throw off your cursor in a while loop (used to parse the results). It requires you to reset the cursor before processing the result set.
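
For a TYPE_FORWARD_ONLY ResultSet specifically, you cannot jump to the last row or rewind, so the usual options are to run a separate SELECT COUNT(*) first or simply to count while you process. A minimal sketch of the latter, assuming an already-executed forward-only ResultSet is passed in:

import java.sql.ResultSet;
import java.sql.SQLException;

public class ForwardOnlyCountSketch {
    static int processAndCount(ResultSet rs) throws SQLException {
        int rowCount = 0;
        while (rs.next()) {
            // ... process the current row here ...
            rowCount++; // count as you go; the total is only known after the last row
        }
        return rowCount;
    }
}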

  • Jdev Database Adapter - polling updated rows

Hello, I have a question regarding the polling strategy available in the database adapter.
I set it up and it works great with newly inserted rows in the table.
However, it doesn't capture updated rows!
For instance, I have the following table:
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    When I insert a new row, it is captured by comparing the last captured ID in the sequencing file.
    ID - NAME - AGE
    1 -John- 21
    2 -Mary- 25
    3 -Cindy- 20 <--------- New row
But when I UPDATE an already existing row, it doesn't load the changed row!
ID - NAME - AGE
1 -John- 26 <----------- Age changed, but polling doesn't recognize it!
2 -Mary- 25
3 -Cindy- 20
Is there a way to get this to work? Should I set a special option? Thank you very much.
I'm using JDeveloper 11g.

Hi John, it depends on which type of polling strategy you are using to poll the new/updated records. You (or your DB team) must have the necessary privileges to add the special field and create triggers.
i. Physical delete polling strategy: cannot capture UPDATE operations on the table. This is because, when the adapter listens to the table, whenever a record is polled, that record is deleted after the polling process. After polling, if a record is not deleted, the adapter knows it is an updated one; but here, when a record is polled, it is deleted. So whenever the adapter encounters a record, it is a new record to the adapter (even though the record was updated before the polling cycle). So physical delete cannot capture UPDATED records.
ii. Logical delete polling strategy: the logical delete polling strategy updates a special field of the table after processing each row (it updates the WHERE clause at runtime to filter out processed rows). The status column must be marked as processed after polling, and the read value must be provided while configuring the adapter. The modified WHERE clause and post-read values are handled automatically by the DB adapter.
    Usage:
<operation name="receive">
<jca:operation
ActivationSpec="oracle.tip.adapter.db.DBActivationSpec"
PollingStrategyName="LogicalDeletePollingStrategy"
MarkReadField="STATUS"
MarkReadValue="PROCESSED"/>
</operation>
This polling strategy captures updated records only if a trigger is added. This is because, when a record is polled, its status is updated to 'PROCESSED'. To capture the record again when it is updated, its status has to go back to 'UNPROCESSED'. So a trigger has to be added that sets the status field to 'UNPROCESSED' whenever the record is updated. Below is an example.
    Ex:
    CREATE OR REPLACE TRIGGER nameOftrigger_modified
    BEFORE UPDATE ON table_name
    REFERENCING NEW AS modifiedRow
    FOR EACH ROW
    BEGIN
:modifiedRow.STATUS := 'UNPROCESSED';
    END;
In this example, STATUS is the special field (of the polled table). When the record is updated, the trigger fires and sets the STATUS field to 'UNPROCESSED'. So when the table is polled, since this record's status is now UNPROCESSED, the record will be captured during polling.
Other polling strategies, like the "Sequencing Table Last Updated" and "Sequencing Table Last-Read Id" polling strategies, can also be used to capture updated records. In these strategies too, you need to add triggers like the one above. They also need an extra helper table to poll.
The logical delete polling strategy is good enough to poll the updated records.
Hope this helps.
    Thank you.

  • Update Row into Run Table Task is not executing in correct sequence in DAC

The "Update Row into Run Table" task is not executing in the correct sequence in DAC.
The task phase for this task is "Post Lost". The depth in the execution plan is 19, but this task sometimes runs at depth 12, sometimes at 14, and sometimes at depth 16. I would like to know whether this sequence of execution is in the correct order or not. Out of the box, this task is executed at the end of the entire load. No errors were reported in the DAC log.
Please let me know of any documents that would highlight this issue.
    rm

Update into Run Table is a task that's required to update a table called W_ETL_RUN_S. The whole intention of this table is to keep a poor man's run history on the warehouse itself. The actual run history is stored in the DAC runtime tables; however, the DAC repository could be on some other database/schema than the warehouse. It's mostly a legacy table that's being carried around. If one pays close attention to this task, it has phase dependencies defined that dictate when it should run.
Apologies in advance for a lengthy post... but it sure might help in understanding how DAC behaves! And it is going to be essential for you to find the issues at hand.
The dependency generation in DAC follows these rules of thumb:
- It considers the source table and target table definitions of the tasks. With this information, tasks that write to a table take precedence over tasks that read from it.
- It considers the phase information. With this information, it is able to resolve some of the conflicts. Should multiple tasks write to the same table, the phase is used to stagger them appropriately.
- It considers the truncate table option. Should there be multiple tasks that write to the same table with the same phase information, the task that truncates the table takes precedence.
- When more than one task that writes to the same table has similar properties, DAC staggers them. However, if one feels that they can all go in parallel, or that a common truncate is desired prior to any of the tasks' execution, one can use a task group.
- A task group is also handy when you suspect the application logic dictates cyclical reads and writes. For example, Task 1 reads from A and writes to B. Task 2 reads from B and writes back to A. If these two tasks have different phases, DAC is able to figure that out and order them accordingly. If not, for example if those tasks need to be of the same phase for some reason, one could create a task group as well.
Now that I have described how dependency generation works: there may be some tasks that have no relevance to other tasks, either as source tables or as target tables. The update into run history is a classic example. The purpose of this task is to update the run information in W_ETL_RUN_S with status 'Completed' and an end timestamp. Because this needs to run at the end, it has a phase dependency defined on it. With this information DAC is able to stagger its position of execution either before (Block) or after (Wait) all the tasks belonging to a particular phase have completed.
Now a description of depth. While depth gives an indication of the order of execution, it is only an indication of how the tasks may be executed. It is a reflection of how the dependencies have been discovered. Let me explain with an example. Tasks that have no dependency get a depth of 0. Tasks that depend on one or more or all of the depth-0 tasks get a depth of 1. Tasks that depend on one or more or all of the depth-1 tasks get a depth of 2. It also means, implicitly, that a task of depth 2 will indirectly depend on a task of depth 0 through other tasks of depth 1. In essence the dependencies translate to an execution graph, which is different from the batch structures one usually thinks of when it comes to ETL execution.
Because DAC does runtime optimization of the order in which tasks are executed, it may pick a task of order 1 over something else with an order of 0. The factors considered for picking the next best task to run include:
- The number of dependent tasks. For example, a task which has 10 dependents gets more priority than one which has 1.
- If all else is equal, it considers the number of source tables. For example, a task having 10 source tables gets more priority than one that has only two.
- If all else is equal, it considers the average time taken by each of the tasks. The longer-running ones get more preference than the quick-running ones.
- and many other factors!
And of course the dependencies are honored throughout the execution. Unless all the predecessors of a task are in a completed state, the task does not get picked for execution.
Another way to think of this depth concept: if one were to execute one task at a time, this is probably the order in which the tasks would be executed.
The depth can change depending on the number of tasks identified for the execution plan.
The immediate predecessors and successors can be very valuable information to look at and should be used to validate the design. "All predecessors" and "All successors" corroborate it even further. This can be accessed by clicking on any task and choosing the Details button; you will see all of this information there. As an alternate method, you can also use the 'All/Immediate Predecessors' and 'All/Immediate Successors' tabs, which provide a flat view of the dependencies. Note that these tabs may have to retrieve a large amount of data, and hence will open in query mode.
SUMMARY: Irrespective of the depth, validate
- that this task has 'Phase dependencies' that span all the ETL phases and has a 'Wait' option;
- by clicking on the particular task, that the task does not have any successors, and that its predecessors include all the tasks from all the phases it is supposed to wait for.
Once you have inspected the above two, you should be good to go, no matter what the depth says!
Hope this helps!

  • Error: 1013231 Unable to update database while in readonly mode for backup

Hi all,
When I'm deleting the members of a dimension, it gives an error (Hyperion 11.1 ASO):
Error: 1013231 Unable to update database while in readonly mode for backup. How can I solve this problem? Please, can anyone help with this?
Thanks

Somebody has probably set the database ready for archiving; maybe some MaxL has been run and the DB has not been returned from read-only mode.
Try running the following MaxL (change app.db to match your app/db):
alter database app.db end archive;
    Cheers
    John
    http://john-goodwin.blogspot.com/

  • My MacPro is extremely slow after upgrading to Yosemite 10.10, even while reading my newspaper on PDF

    Problem description:
    My Mac Pro is extremely slow after upgrading to Yosemite 10.10, even while reading my newspaper on PDF
    This is my EtreCheck print
    EtreCheck version: 2.0.11 (98)
    Report generated 17. november 2014 kl. 10.49.28 CET
    Hardware Information: ℹ️
      MacBook Pro (13-inch, Mid 2012) (Verified)
      MacBook Pro - model: MacBookPro9,2
      1 2.5 GHz Intel Core i5 CPU: 2-core
      4 GB RAM Upgradeable
      BANK 0/DIMM0
      2 GB DDR3 1600 MHz ok
      BANK 1/DIMM0
      2 GB DDR3 1600 MHz ok
      Bluetooth: Good - Handoff/Airdrop2 supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      Intel HD Graphics 4000 -
      Color LCD 1280 x 800
    System Software: ℹ️
      OS X 10.10 (14A389) - Uptime: one day 15:45:13
    Disk Information: ℹ️
      TOSHIBA MK5065GSXF disk0 : (500,11 GB)
      S.M.A.R.T. Status: Verified
      EFI (disk0s1) <not mounted> : 210 MB
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
      Macintosh HD (disk1) /  [Startup]: 498.88 GB (290.89 GB free)
      Encrypted AES-XTS UnlockedConverting
      Core Storage: disk0s2 499.25 GB Online
      MATSHITADVD-R   UJ-8A8 
    USB Information: ℹ️
      Apple Inc. FaceTime HD Camera (Built-in)
      Apple Inc. Apple Internal Keyboard / Trackpad
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple Computer, Inc. IR Receiver
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Gatekeeper: ℹ️
      Mac App Store and identified developers
    Kernel Extensions: ℹ️
      /Library/Extensions
      [loaded] net.telestream.driver.TelestreamAudio (1.1.1 - SDK 10.8) Support
      /System/Library/Extensions
      [not loaded] com.devguru.driver.SamsungComposite (1.4.20 - SDK 10.6) Support
      /System/Library/Extensions/ssuddrv.kext/Contents/PlugIns
      [not loaded] com.devguru.driver.SamsungACMControl (1.4.20 - SDK 10.6) Support
      [not loaded] com.devguru.driver.SamsungACMData (1.4.20 - SDK 10.6) Support
      [not loaded] com.devguru.driver.SamsungMTP (1.4.20 - SDK 10.5) Support
      [not loaded] com.devguru.driver.SamsungSerial (1.4.20 - SDK 10.6) Support
    Launch Agents: ℹ️
      [loaded] com.divx.dms.agent.plist Support
      [loaded] com.divx.update.agent.plist Support
      [loaded] com.oracle.java.Java-Updater.plist Support
    Launch Daemons: ℹ️
      [failed] com.adobe.fpsaud.plist Support
      [loaded] com.microsoft.office.licensing.helper.plist Support
      [loaded] com.oracle.java.Helper-Tool.plist Support
      [loaded] com.oracle.java.JavaUpdateHelper.plist Support
      [invalid?] com.torch.update.agent.plist Support
    User Launch Agents: ℹ️
      [loaded] com.genieo.completer.download.plist Support
      [loaded] com.genieo.completer.ltvbit.plist Support
      [loaded] com.genieo.completer.update.plist Support
      [loaded] com.google.keystone.agent.plist Support
    User Login Items: ℹ️
      None
    Internet Plug-ins: ℹ️
      FlashPlayer-10.6: Version: 15.0.0.223 - SDK 10.6 Support
      QuickTime Plugin: Version: 7.7.3
      DivX Web Player: Version: 3.2.3.1164 - SDK 10.6 Support
      Flash Player: Version: 15.0.0.223 - SDK 10.6 Support
      OVSHelper: Version: 1.1 Support
      Default Browser: Version: 600 - SDK 10.10
      SharePointBrowserPlugin: Version: 14.4.4 - SDK 10.6 Support
      Unity Web Player: Version: UnityPlayer version 4.5.5f1 - SDK 10.6 Support
      Silverlight: Version: 5.1.20913.0 - SDK 10.6 Support
      JavaAppletPlugin: Version: Java 8 Update 25 Check version
    User Internet Plug-ins: ℹ️
      NPRoblox: Version: 1, 2, 8, 25 - SDK 10.10 Support
      Google Earth Web Plug-in: Version: 7.1 Support
    Safari Extensions: ℹ️
      Omnibar
    3rd Party Preference Panes: ℹ️
      Flash Player  Support
      Java  Support
      Perian  Support
    Time Machine: ℹ️
      Skip System Files: NO
      Auto backup: YES
      Volumes being backed up:
      Macintosh HD: Disk size: 498.88 GB Disk used: 207.99 GB
      Destinations:
      Elements [Local]
      Total size: 1.00 TB
      Total number of backups: 0
      Oldest backup: -
      Last backup: -
      Size of backup disk: Adequate
      Backup size 1.00 TB > (Disk used 207.99 GB X 3)
    Top Processes by CPU: ℹ️
          89% lsregister
          3% WindowServer
          1% mds
          1% sysmond
          0% mtmd
    Top Processes by Memory: ℹ️
      142 MB Preview
      125 MB com.apple.WebKit.WebContent
      125 MB Finder
      86 MB ocspd
      77 MB com.apple.WebKit.Plugin.64
    Virtual Memory Information: ℹ️
      335 MB Free RAM
      1.63 GB Active RAM
      1.32 GB Inactive RAM
      776 MB Wired RAM
      17.27 GB Page-ins
      345 MB Page-outs

Check out this thread: https://discussions.apple.com/thread/6466590?start=0&tstart=0.
For many users, the Folder Actions Dispatcher is the culprit that eats almost all available memory if it is enabled in ~/Library/LaunchAgents.
