Indexing Question - Shorten Process?

It appears that when one searches for a keyword, Bridge indexes all the files in the folder or folders being searched. This can take a while if there are lots of files.
If one adds a folder and then searches for the same keyword, it goes through the whole indexing sequence again.
I use a central cache; if I checked "export cache to folders", would that eliminate the constant indexing for searches?
I saved the search as a collection and then clicked on that folder; Bridge then went through the whole indexing routine again, even though I had not changed a thing.
I have the box checked "search all non-indexed files", which is probably the culprit, but if it is not checked I can miss files.
Does anyone have a clue as to when Bridge needs to index, when to check "search non-indexed files", and how to reduce the indexing time?

Hi,
Try using 'INNER JOIN' as opposed to the 'FOR ALL ENTRIES' construct.
BR/
Mathew.
P.S.  This link could give some help with the coding.
http://help.sap.com/saphelp_nw70/helpdata/en/fc/eb39c4358411d1829f0000e829fbfe/content.htm
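To illustrate the idea, here is a minimal sketch in plain SQL with hypothetical table and column names (ABAP Open SQL join syntax differs slightly, but the shape is the same). FOR ALL ENTRIES effectively re-runs the detail query for each block of keys taken from an internal table, while a single INNER JOIN lets the database resolve the lookup in one pass:

-- Instead of fetching header keys first and then reading details
-- with FOR ALL ENTRIES, join the two tables in one statement.
SELECT h.order_id, i.item_no, i.amount
  FROM orders h
       INNER JOIN order_items i
          ON i.order_id = h.order_id
 WHERE h.created_on >= DATE '2008-01-01';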

Similar Messages

  • Questions on process chains

    Hai all,
             I have two questions on Process chains.
    1. I created a process chain for master data and a process chain for data loads (with ODS and cube). As the master data has to be loaded before the transaction data, can I include the master data process chain as a local process chain right after the start process, so that the system proceeds to the transaction data only after the master data has loaded?
    2. I designed a process chain with the aggregation and compression of cube data, but I forgot to check the "delete overlapping requests" option in the InfoPackage that loads the InfoCube. I ran the process chain and now there are two requests and duplicate documents. I want to delete the recent request, check the "delete overlapping requests" option in the InfoPackage, and rerun. The problem is that the system is not allowing me to delete the recent request.
    I appreciate any kind of help.
    Thanks.

    Hai Bhanu,
    Thanks for the reply. I am scheduling the request for deletion in the InfoCube's Manage screen, but the request is not getting deleted. Every time I click refresh, the delete icon disappears and the screen goes back to how it was before I scheduled the deletion. I checked again after some time, and it was still the same...
    I wonder, can collapsed (compressed) requests not be deleted?

  • Interview questions in process chain

    hi 
    can anyone send me possible interview questions on process chains and errors, with answers.
    thanks in advance
    pradeep

    Hi Pradeep
    1.Procedure for repeat delta?
    You need to set the request status to red in the monitor screen and then delete the request from the ODS/cube. Then, when you open the InfoPackage again, the system will prompt you for a repeat delta.
    also.....
    Go to RSA7 -> F2 -> Update Mode -> Delta Repetition.
    Delta repetition is handled based on the type of upload you are carrying out.
    1. If you are loading master data, then most of the time you will change the QM status to red and then repeat the delta; the repeat is allowed only if you make that change.
    Sometimes you will need to investigate further if the repeat of the delta is still not allowed even after the QM status is set to red.
    If this is not the case, the source system and therefore also the extractor, have not yet received any information regarding the last delta and you must set the request to GREEN in the monitor using a QM action.
    The system then requests a delta again since the last delta request has not yet occurred for the extractor.
    Afterwards, you must reset the old request that you previously set to GREEN to RED since it was incorrect and it would otherwise be requested as a data target by an ODS.
    Caution: if the terminated request was a REPEAT request itself, always set this to RED so that the system tries to carry out a repeat again.
    To determine whether a delta or a repeat is to be requested, the system uses ONLY the status in the monitor.
    It is irrelevant whether the request is updated in a data target somewhere.
    When activating requests in an ODS, the system checks delta repeat requests for completeness and the correct sequence.
    Each green delta/repeat request in the monitor that came from the same DataSource/source system combination must be updated in the ODS before activation, which means that in this case, you must set them back to RED in the monitor using a QM action when using the solution described above.
    If the source of the data is a DataMart, it is not just the DELTARNR field that is relevant (in the roosprmsc table in the system in which the source DataMart is, which is usually your BW system since it is a Myself extraction in this case), rather the status of the request tabstrip control is relevant as well.
    Therefore, after the last delta request has terminated, go to the administration of your data source and check whether the DataMart indicator is set for the request that you wanted to update last.
    If this is NOT the case, you must NOT request a repeat since the system would also retransfer the data of the last delta but one.
    This means, you must NOT start a delta InfoPackage which then would request a repeat because the monitor is still RED. For information about how to correct this problem, refer to the following section.
    For more information about this, see also Note 873401.
    Proceed as follows:
    Delete the rest of this request from ALL updated data targets, set the terminated request to GREEN IN THE MONITOR and request a new DELTA.
    Only if the DataMart indicator is set does the system carry out a repeat correctly and transfers only this data again.
    This means that only in this case can you leave the monitor status as it is and restart the delta InfoPackage; this then creates a repeat request.
    In addition, you can generally also reset the DATAMART indicator and then work using a delta request after you have set the incorrect request to GREEN in the monitor.
    Simply start the delta InfoPackage after you have reset the DATAMART indicator AND after you have set the last request that was terminated to GREEN in the monitor.
    After the delta request has been carried out successfully, remember to reset the old incorrect request to RED since otherwise the problems mentioned above will occur when you activate the data in a target ODS.
    What is a process chain and how have you used it?
    A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
    B) In one of our scenarios we wanted to upload a wholesale price InfoObject that holds the wholesale price for all materials, and then load transaction data. While loading the transaction data, populating the wholesale price required a lookup in the update rules on this InfoObject's master data table. This dependency of first uploading the master data and then the transaction data was handled through the process chain.
    What is a process chain and how have you used it?
    A) We have used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-life example in the attachment.
    For more detail:
    Collecting Process Chain Statistics
    /thread/235805 [original link is broken]
    Advice regarding process chains
    creation of process chains

  • Bitmap Index Question

    I have a star schema database set up, with a bunch of definition tables holding 2-10 values each. Most things I read say to use bitmap indexes only in data warehousing, but in the same breath they describe tables exactly like the ones I have. So my question is: do bitmap indexes ever have a use outside of a data warehouse? We don't do millions of transactions an hour, but it is an asset management front end run using PHP, so the main data tables get inserts, updates, and deletes during the day. I'd say on average we have about 30 users at any given time performing actions on the tables or pulling reports.
    On a side note, but still related to indexes: is it better to have the indexes stored in a different tablespace from the tables? If so, what is the effect?
    Thanks in advance.
    Setup:
    Oracle 11g running on Ubuntu 9.10 64bit

    Having a star schema design for a transaction processing application seems a bit strange (but what do I know...).
    Have you verified that you definitely need bitmap indexes and that B*Tree indexes will not serve the purpose? Just because it is a star schema does not necessarily mean one has to have bitmap indexes.
    If you expect 30 users (on average) to concurrently modify the data, I believe bitmap indexes are not the right choice, as the DML actions will suffer from contention. Bitmap indexes negatively affect concurrent transactions.
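    A minimal sketch of that advice, using hypothetical table and index names: on a table that takes concurrent INSERT/UPDATE/DELETE traffic, replace the bitmap index on the low-cardinality key with a plain B*Tree index, which locks individual index entries rather than large ranges of rows.
    -- Hypothetical names; the point is the index type, not the schema.
    DROP INDEX asset_type_bix;
    CREATE INDEX asset_type_ix ON assets (asset_type_id);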
    Rafi has answered your second question.

  • CTXSRV and automatic indexing question.

    We are running Portal 3.0 in production on Solaris and have documents uploaded on a regular basis.
    We are seeing that new documents are being picked up by CTXSRV for text indexing, but the ATTRIBUTES associated with the documents don't seem to be indexed.
    To make the ATTRIBUTES searchable, we have to drop and recreate the interMedia indexes.
    Is this how this works? Is there another way to make the ATTRIBUTES get "picked up"?
    Will CTX_SCHEDULE help here?
    Any thoughts / help appreciated.

    I am new to this field and I had a similar problem. I wanted to start the ctxsrv process so that inserted data is immediately available for viewing. My question is: what exactly does this command do? How long does it normally take to execute? Is there a way to test whether the command has completed successfully?
    Radhika
    Quoting alan barker:
    "We have PDFs stored under the Sun OS that are added to on a regular basis, the names of which appear immediately in the database. In order to apply the changes to the interMedia indexes, the indexes can be rebuilt, or we choose to use the ctxsrv process, which applies the changes automatically.
    os prompt> ctxsrv -user ctxsys/ctxsys -personality m -log ctx.log
    With the latter, fragmentation will occur if you perform deletes on this data. We only insert.
    I hope this helps!"
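    As a hedged aside, with a hypothetical index name: instead of dropping and recreating the interMedia index, pending changes can be applied on demand. The REBUILD form below was the older route; on later Oracle Text releases the packaged CTX_DDL.SYNC_INDEX call does the same job.
    -- Apply pending document changes to the Text index on demand.
    ALTER INDEX doc_attr_idx REBUILD ONLINE PARAMETERS ('sync');
    -- On later releases:
    EXEC CTX_DDL.SYNC_INDEX('doc_attr_idx');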

  • Help required in Index Question

    Hi,
    This is an interview question that I faced.
    There's a table, Statistics, with more than 300 columns. It has normal indexes on, say, five columns: col1, col2, col3, col4, col5.
    Queries involving the first four columns give excellent performance, but queries involving col5 give poor performance.
    COL5 covers around 5 million rows, has one normal index created on it, and contains just one distinct value, say 'STATS'. (He said the queries can be simple selects or joins, and they may select all the rows or a few rows, maybe 5%, 10%, etc.)
    The question was: what could be the possible reason?
    I was unable to answer it, but I think a BITMAP index might be useful in this scenario.
    Any help or suggestions will be highly appreciated.

    Is it a good idea to create a normal index on a column that has just one value, 'STATS', when the total number of rows is more than 5 million?
    No
    See the I/O
    SQL> CREATE TABLE bitmap_table AS SELECT ROWNUM rn FROM all_objects
      2  /
    Table created.
    SQL> ALTER TABLE bitmap_table ADD (status VARCHAR2(1))
      2  /
    Table altered.
    SQL> SELECT COUNT(*) FROM bitmap_table
      2  /
    COUNT(*)
        64177
    SQL> UPDATE bitmap_table SET status='Y'
      2  /
    64177 rows updated.
    SQL> COMMIT
      2  /
    Commit complete.
    SQL> CREATE TABLE btree_table AS SELECT * FROM bitmap_table
      2  /
    Table created.
    SQL> CREATE BITMAP INDEX bitmap_table_idx ON bitmap_table (status)
      2  /
    Index created.
    SQL> CREATE INDEX btree_table_idx ON btree_table (status)
      2  /
    Index created.
    SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','BITMAP_TABLE')
    PL/SQL procedure successfully completed.
    SQL> EXEC DBMS_STATS.GATHER_INDEX_STATS('SCOTT','BITMAP_TABLE_IDX')
    PL/SQL procedure successfully completed.
    SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','BTREE_TABLE')
    PL/SQL procedure successfully completed.
    SQL> EXEC DBMS_STATS.GATHER_INDEX_STATS('SCOTT','BTREE_TABLE_IDX')
    PL/SQL procedure successfully completed.
    SQL> SET AUTOTRACE TRACEONLY
    SQL> SELECT status
      2    FROM bitmap_table
      3   WHERE status='Y'
      4  /
    64177 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2 Card=64177 Bytes=128354)
       1    0   BITMAP CONVERSION (TO ROWIDS) (Cost=2 Card=64177 Bytes=128354)
       2    1     BITMAP INDEX (FAST FULL SCAN) OF 'BITMAP_TABLE_IDX' (INDEX (BITMAP))
    Statistics
            171  recursive calls
              0  db block gets
             31  consistent gets
              0  physical reads
              0  redo size
         804546  bytes sent via SQL*Net to client
         475281  bytes received via SQL*Net from client
           4280  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
          64177  rows processed
    SQL> DROP INDEX BITMAP_TABLE_IDX
      2  /
    Index dropped.
    SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT','BITMAP_TABLE')
    PL/SQL procedure successfully completed.
    SQL> SET AUTOTRACE TRACEONLY
    SQL> SELECT status
      2    FROM bitmap_table
      3   WHERE status='Y'
      4  /
    64177 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=26 Card=64202 Bytes=128404)
       1    0   TABLE ACCESS (FULL) OF 'BITMAP_TABLE' (TABLE) (Cost=26 Card=64202 Bytes=128404)
    Statistics
            192  recursive calls
              0  db block gets
           4405  consistent gets
              0  physical reads
              0  redo size
         804546  bytes sent via SQL*Net to client
         475281  bytes received via SQL*Net from client
           4280  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
          64177  rows processed
    SQL> SELECT status
      2    FROM btree_table
      3   WHERE status='Y'
      4  /
    64177 rows selected.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=27 Card=64177 Bytes=128354)
       1    0   INDEX (FAST FULL SCAN) OF 'BTREE_TABLE_IDX' (INDEX) (Cost=27 Card=64177 Bytes=128354)
    Statistics
            171  recursive calls
              0  db block gets
           4429  consistent gets
              0  physical reads
              0  redo size
         804546  bytes sent via SQL*Net to client
         475281  bytes received via SQL*Net from client
           4280  SQL*Net roundtrips to/from client
              4  sorts (memory)
              0  sorts (disk)
          64177  rows processed
    SQL> SELECT index_name,blevel,num_rows
      2    FROM user_indexes
      3   WHERE index_name='BTREE_TABLE_IDX'
      4  /
    INDEX_NAME                        BLEVEL  NUM_ROWS
    BTREE_TABLE_IDX                        1     64177
    SQL> SELECT index_name,blevel,num_rows
      2    FROM user_indexes
      3*  WHERE index_name='BITMAP_TABLE_IDX'
    SQL> /
    INDEX_NAME                        BLEVEL  NUM_ROWS
    BITMAP_TABLE_IDX                       1         3
    Khurram

  • PDF Output and Index question

    I know it's a little unorthodox, but due to some security and search-spider constraints on a project I'm on, we need to post PDF documentation to our website rather than HTML Help or WebHelp.
    My question is this: when generating a PDF from RoboHelp, is there a way to have the index that was created in the project converted to a PDF page (or pages) of links, so that clicking a word in the index takes the user to the location in the PDF where that word appears?
    Basically, I'm looking for the index in the PDF to function
    much the same as it would in HTML Help or WebHelp.

    The reason for needing the PDF is a bit convoluted, but I will try to explain it to the best of my ability.
    First, the content the client is trying to post is not actually a "Help" system but more of a document, a 50-page guide to be exact.
    As for security, CHM files are out of the question, as using them would require changes to our servers which our Information Security team refuses to make.
    As for the WebHelp format: since the document itself is part of a larger website, all of its content needs to be accessible to search-engine spiders. WebHelp uses frames, and frames are a roadblock for spiders, so the only way around this would be a fair amount of additional development to make each section of content within the WebHelp directly accessible through some indexed page of links, which isn't practical for us with this document.
    Therefore we chose PDF, as it is indexable and its contents can be read by spiders. The reason the client would like to use RoboHelp to generate the PDF is to take advantage of the content management features it provides.

  • Switching from PC to Mac, have question about processing speed

    Hi, I am going to buy a used Mac off of eBay and I am pretty clueless about where to start. I am not very knowledgeable when it comes to the difference in processing speeds between PCs and Apples, and have just been using what has been passed along. I am on a budget and cannot afford a bunch of upgrades when I get a new (to me) Apple. I am currently working on an old PC (8 years?) that has had as many upgrades as it can handle: a Pentium III at 548 MHz, 384 MB of RAM, and a 40 GB hard drive on Windows XP. The Apple I am stuck on right now is a G3 with a bunch of upgrades: an iMac PowerPC G3 with a 600 MHz processor, 512 MB of memory, and a 120 GB hard drive. It has OS X 10.4 Tiger and iLife, a single processor, and an AirPort Extreme card. My questions are: 1) does it take less processor speed and RAM to run the operating system on an Apple, and 2) am I going to notice a big difference in speed between what I have now and the Apple? The G3 is $249 and comes with a 90-day guarantee, not counting shipping. Any help is appreciated, thanks in advance!
    Tracy

    I'd save your money or apply it to an Apple G4 (AGP or newer) tower. Depending on how it was configured, you may find that your PIII was a little faster loading web pages than the 600 MHz iMac. My 1 GHz P3s are faster than my iMac DV G3-400 MHz models. I think $249 plus shipping for a five- or six-year-old iMac is a bit high. I picked up one of my G3-400 MHz iMac DVs for $20 at a thrift store. If there are any shops in your area that accept donated computers, you might want to check for a local bargain.
    While the 600 MHz model that you described is loaded, that's about as far as it goes, other than bumping the memory to 1 GB; that would likely involve removing a pair of 256 MB DIMMs already in it and buying a pair of 512 MB DIMMs. If it has a single 512 MB DIMM installed, you wouldn't have to spend as much. Selling the computer with Tiger installed but without the OS installer media will eventually put you in a bind, when you'll need to boot from the installer CD to perform disk repair maintenance. Some may consider Tiger to be a bit demanding of that iMac's system resources, preferring its predecessor "Panther" instead.
    The bottom line is that the iMac is somewhat limited, in terms of design (no PCI slots) and future upgrade potential. If it will suit your needs as a starter Mac, try finding a comparable model for less. You might also check for used computer dealers in your area, as that saves the cost of shipping, and of return shipping if there's a problem. The G4 tower has more potential for hardware tweaking, if you decide to stick with the Mac OS. You can find G4 towers on eBay selling for less than $249, but you'd need to have a monitor. Is space at a premium, in terms of a preference for the iMac's compact form factor?
    There is a direct correlation between the OS version and the optimal processor speed and amount of installed memory to run it adequately. OS X needs and uses more memory than the older, pre-OS X Mac versions, just as Vista or XP SP2 requires more memory than Windows Me or 98SE. A comparison of Intel's P3s to Apple's G3s or G4s gets into the debate over which one handles more instructions per clock cycle at a given speed. It's difficult to do a head-to-head comparison, because the hardware and software are not the same.

  • Quick MySQL Indexing question

    Hi all,
    I know this is not strictly a JDBC problem, but I was wondering if you have any ideas anyhow...
    I've implemented an index on two of the columns in a table in my MySQL DB. My question is: when I insert new rows into the table, is the index automatically updated? If not, how do I go about updating the index?
    Many thanks,
    BBB

    I thought the database would manage that for you. AFAIK, you don't have to do anything. That's the way it is in Oracle, and I thought that was true for all RDBMSs. - MOD
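    To make that concrete, a minimal sketch with hypothetical table and column names: once the index exists, MySQL maintains it on every INSERT, UPDATE, and DELETE, and no manual refresh step exists or is needed.
    CREATE INDEX idx_orders_cust ON orders (customer_id, order_date);
    -- No maintenance call after DML; the index is kept current automatically.
    INSERT INTO orders (customer_id, order_date) VALUES (42, '2008-04-10');
    -- EXPLAIN should show the lookup going through the index.
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;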

  • Oracle Warehouse Builder Question about Process Flow

    Oracle 11.1.0.7:
    We currently have various mappings, and what we want in the process flow is to fork, run the mappings in parallel, and merge back, going further in the process flow only if all of those mappings complete successfully. I see that a fork is allowed for parallel processing, but there is no merge. So how do I merge the branches back and go to the next step in the process flow? The next step depends on the successful completion of all the previous mappings, and we want those mappings to execute in parallel for performance reasons.

    Could someone help answer my question?

  • Create Index fail in process chain

    Hi Friends,
    I have a process chain in the production system which loads inventory data every day at 6:00 AM.
    Two days ago the R/3 production system failed and the data was not loaded into BI. The chain stopped at the Delete Index step. I started it again by selecting Repeat; that worked, and the InfoPackage also executed successfully.
    After that, it failed at the Create Index step with the message "Job cancelled after system exception ERROR_MESSAGE". Since then it has hit the same problem every day.
    Could you help me work out how to solve this and execute the chain successfully?
    Thanks & Regards

    Hi,
    Try to create the index for the cube manually: go to the InfoCube, right-click -> Manage -> Performance -> Perform Check Index. If it shows red, do Repair Index; if that does not help, create the index. After that, run it through the process chain.
    Hope it will resolve your problem.
    Sangita

  • Question on Processing Pattern Sessions - not behaving as expected

    I have implemented some schedulable jobs in my extensible cache configuration that are scheduled at a fixed rate and then use the processing pattern to submit work to the grid.
    However I am seeing some unexpected behaviour which does not seem to make much sense. My jobs are submitted to the grid as follows (some code edited for brevity):-
    @Override
    public void run() {
        ProcessingSession session = null;
        try {
            session = new DefaultProcessingSession(
                    StringBasedIdentifier.newInstance("MySession"));
            SubmissionOutcome outcome = session.submit(this,
                    new DefaultSubmissionConfiguration(),
                    new TaskSubmissionCallback(taskName));
        } catch (Throwable t) {
            log.error("Failed to Submit Process Pattern Task [{}] For Session [{}]",
                    taskName, nodeName);
        } finally {
            try {
                // Shutting down right after submit is what turns out to
                // trigger the RejectedExecutionException shown below.
                session.shutdown();
            } catch (Throwable t) {
                log.error("[{}] Failed to Shutdown Processing Pattern Session [{}]",
                        this, nodeName);
            }
        }
    }
    My tasks get scheduled and then submitted and executed in the grid. I am currently running only a single node through Eclipse for testing.
    After the task has executed, my TaskSubmissionCallback class gets invoked, onDone() is called, and my result is returned:
    public void onDone(Object oResult) {
        log.debug("[{}] Submission done - Result = [{}]", m_sTaskName, oResult);
    }
    So all is working. However, a couple of milliseconds later I see the following in the logs:
    2012-04-03 17:15:50.407/19.274 Oracle Coherence GE 3.6.0.4 <Error> (thread=DistributedCache:DistributedServiceForProcessingPatternSubmissionResults:EventDispatcher, member=1): The following exception was caught by the event dispatcher:
    2012-04-03 17:15:50.407/19.274 Oracle Coherence GE 3.6.0.4 <Error> (thread=DistributedCache:DistributedServiceForProcessingPatternSubmissionResults:EventDispatcher, member=1):
    java.util.concurrent.RejectedExecutionException
         at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1760)
         at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
         at java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:216)
         at java.util.concurrent.ScheduledThreadPoolExecutor.schedule(ScheduledThreadPoolExecutor.java:366)
         at java.util.concurrent.ScheduledThreadPoolExecutor.execute(ScheduledThreadPoolExecutor.java:438)
         at com.oracle.coherence.patterns.processing.internal.DefaultProcessingSession.removeCacheObjectsAsynch(DefaultProcessingSession.java:313)
         at com.oracle.coherence.patterns.processing.internal.DefaultProcessingSession.handleResultChange(DefaultProcessingSession.java:288)
         at com.oracle.coherence.patterns.processing.internal.DefaultProcessingSession$1.onMapEvent(DefaultProcessingSession.java:204)
         at com.tangosol.util.MultiplexingMapListener.entryUpdated(MultiplexingMapListener.java:42)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:270)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:226)
         at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:557)
         at com.tangosol.coherence.component.util.SafeNamedCache.translateMapEvent(SafeNamedCache.CDB:7)
         at com.tangosol.coherence.component.util.SafeNamedCache.entryUpdated(SafeNamedCache.CDB:1)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:270)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap$ProxyListener.dispatch(PartitionedCache.CDB:22)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap$ProxyListener.entryUpdated(PartitionedCache.CDB:1)
         at com.tangosol.util.MapEvent.dispatch(MapEvent.java:270)
         at com.tangosol.coherence.component.util.CacheEvent.run(CacheEvent.CDB:18)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service$EventDispatcher.onNotify(Service.CDB:26)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    I have managed to solve this by removing the session.shutdown() call in the class that submits the job for processing via the Processing Pattern. However, this seems odd to me, as the submitter should not need to hang around until the job completes (that is surely the point of the callback handler). I can of course code around it by having a singleton class which keeps the processing session alive constantly and stores it in the Environment. But the question is: why?
    This is running Coherence 3.6 and coherence-processingpattern-1.3.423238.
    I would be grateful to know whether this is a bug or my understanding is somehow confused!
    TIA
    Martin

    I agree, believe me. However, one goes to war with the army one has, not the army one wishes one had, to quote somebody who, um... okay, failed miserably. Hmm...
    It's a flat-rate project, so the troubleshooting isn't costing them any more, and their IT department wags the rest of the company and won't buy stuff. Eventually they'll have CS4 and these problems will go away, but probably not until next year.
    If I were willing to give up the CS4-specific features (which I'm not; maximum efficiency in long documents is the core competency of my business), working in CS3 still wouldn't be an option because I don't own it. I'm also not likely to be able to talk the client into the idea that I'll take care of the last-minute tweaks instead of their having to do it all.
    (Keep in mind, too, that there's more behind the scenes than I'm necessarily sharing in a quick forum post. If I can keep the typesetting from shifting in the .inx for now, I'm good.)
    I filed that bug report--thanks!
    UPDATE: Client says it looks right. Off we go...

  • Question on process flow

    hi guys,
    I need some input here; I hope you can give me some. I've created some process flows. When I try to deploy them, I get an error saying that I don't have a Workflow repository in my target database, so I can't do my deployment. Fine, I understand that.
    I've installed Oracle Workflow from the 9i database CD, since I found there is an option to choose it, and I've checked through the Universal Installer that it has been installed. So my questions are:
    1. Can I use the Oracle Workflow from the 9i Database CD?
    2. If yes, where can I set up the Workflow repository? I didn't find anything I could use to do the setup. I'm running my DB on an NT machine.
    3. If no, does that mean I have to install Oracle Workflow from its own CD?
    I hope to get some responses. Thanks in advance.
    regards,
    ykl

    hi guys,
    any idea what this error is?
    RPE-02085: Failed to test wb_rti_workflow_util.initialize through deployed Workflow Database Link ADMINX.US.ORACLE.COM@WB_LK_TEST_PF. Please check that "EXECUTE ANY PROCEDURE" privilege is set and that the OWB Runtime is available.
    - ORA-06550: line 1, column 7:
    PLS-00201: identifier 'WB_RTI_WORKFLOW_UTIL.INITIALIZE@ADMINX.US.ORACLE.COM@WB_LK_TEST_PF' must be declared
    ORA-06550: line 1, column 7:
    PL/SQL: Statement ignored
    I've registered my location for the Workflow, but when I try to deploy my process flows it gives me this error. Do I need to set any privileges? Please advise.
    regards,
    ykl
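    The error text itself points at the likely fix. A hedged sketch, assuming the conventional Oracle Workflow schema name OWF_MGR (substitute whichever user owns your Workflow repository):
    -- Grant the privilege that the RPE-02085 message asks about.
    GRANT EXECUTE ANY PROCEDURE TO owf_mgr;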

  • How to create process variant for delete index in a process chain

    Respected all,
    I am creating a process chain but am unable to create the process variant for the Delete Index step. Kindly tell me the step-by-step procedure for creating the variant for Delete Index. Also, if we use a variant that was prepared previously, will it be safe? Will it affect the other running process chains?
    Please reply.
    Thanks,
    Abhay

    Hi,
    Please do not use an already created variant for index deletion. It might delete indices of some other cube.
    You can follow the following steps -
    1. Open the chain in Edit mode and click on the "Process Types" button
    2. On the left-hand side you will have various process types. Choose "Data Target Administration" and expand it.
    3. Choose the first process, "Delete Index" (it will be marked with a trash-bin icon), and drag it into your chain
    4. It will ask you to create a variant; press the Create button
    5. In the new window, enter the process variant's technical name and description
    6. Then, in the next window, choose Object Type = Cube from the dropdown and the object name via browsing. After choosing the cube name, click on "Transfer Selections"
    7. Save and return to your chain
    8. It will automatically generate a Create Index step after the Delete Index step as well.
    9. You need to break the link between the create and delete steps and insert the InfoPackage in between, to get the sequence: delete index --> load cube --> create index.
    Please let me know if this is helpful.
    Regards
    nishant

  • Panel Decoration Reference index Question

    I have a front panel that uses native LV controls, so I avoid scaling objects with monitor resolution, as I have not had the best results with that when using native LV controls and indicators.
    The front panel was laid out on a 1680 x 1050 widescreen monitor. I want to use it on my laptop, which runs 1280 x 1024. I have the vertical taken care of by ensuring all the controls fit within the vertical area, but I need help with the horizontal scrolling.
    I am trying to limit the left and right scrolling so it does not extend beyond the leftmost decoration and the rightmost decoration. I used Panel -> Decoration property nodes to get access to the decoration bounds so I can set the left, top, right, and bottom of FP.BOUNDS.
    But I have not had any luck getting this to limit the horizontal scrolling; it still goes too far left and too far right.
    Another question: how do I find out the index of a decoration when there are a lot of decorations on the front panel?
    Thanks
    Tim Crouse

     Tim,
    You can get the reference of a specific decoration by looking at its size and/or color. Using Width, Height, and Color, you should be able to pin down the exact decoration you want a reference to. The attached picture just looks at Height.
    ...(I'm getting tired of the editor eating my posts)...
    Richard
    Attachments:
    specific decoration ref.gif (50 KB)
