Data Integrator complex job hangs after workflow completion

Post Author: Iomega
CA Forum: Data Integration
I have a complex Data Integrator job. If it fails with an error and I try to rerun it from the Designer, it hangs after the workflows complete, so I have to replicate the job and run it manually, removing each workflow as it completes. Our other DI jobs don't do this; they run straight through to completion. The data warehouse is an Oracle database. Does anyone know how to correct this?

Similar Messages

  • Compressor hangs after 94% complete, 28 hours "Error: unrecognized request"

    After 28 hours I realized the job was hung. The last good log entry before the errors seems to suggest the job was done anyway. Is this a correct assumption?
    The compressed file "seems" ok to me so I duplicated it in case Compressor wants to throw it away when I cancel the job.
    Can I use the file I duplicated or could it be bad?
    Can I avoid this error in the future?
    I was using the standard H264.LAN setting with audio disabled.
    Here is a snippet of the log -- the "error" log entry is repeated about 100 or more times.
    <log tms="204979368.290" tmt="07/01/2007 06:42:48.290" pid="272" msg="Done _processRequest for job target: file://localhost/Volumes/M/C/A.mov-1"/>
    <mrk tms="204979368.294" tmt="07/01/2007 06:42:48.294" pid="272" kind="end" what="service-request" req-id="210A219B-B3D3-484D-B58B-20E328CF3724:1" msg="Processing service request end."></mrk>
    <mrk tms="204979486.606" tmt="07/01/2007 06:44:46.606" pid="272" what="service-request" req-id="210A219B-B3D3-484D-B58B-20E328CF3724:1" msg="Error: unrecognized request."></mrk>

    I can confirm I have the same circumstance with Compressor via Qmaster, with either local or distributed virtual clusters.
    In varied cases, the output from the transcode is usable.
    Safety first: more often than not the transcode output is OK if the Qmaster virtual cluster is localised on the same host and not distributed over other hosts.
    FWIW: transcodes using Compressor alone always work when there is only one instance running, i.e. when Compressor submits the work to "this computer".
    However, in this workflow the downside is that the Compressor transcoder does not utilise the cores. On this dual quad-core octo, each core is about 20%-30% used. Again, reasons for this are documented by Apple and also by people on these forums, I believe.

  • All BW broadcast jobs hang after we restart the server

    Dear all experts,
    We are encountering an issue where all the BW broadcast jobs hang when we restart the server.
    Any advice?
    Thank you.

    > We are encountering an issue where all the BW broadcast jobs hang when we restart the server.
    What does "hang" mean? Use SM37, check for active jobs, and use the menu to "check status".
    Markus

  • Start one job after another completes using a PL/SQL procedure and DBMS_JOB

    All,
    I am attempting to refresh materialized views using DBMS_JOB, with a PL/SQL program looping through each materialized view name that resides in a table I created. We use the table because the views have to be refreshed in a specific order, and I use the ORDER_OF_REFRESH column to dictate which MV comes first, second, third, etc.
    Now, I have this working to the extent that it kicks off 4 materialized views (I currently have the procedure set to only do 4 MVs for testing purposes), but I would ultimately like the procedure to create a new DBMS_JOB that calls DBMS_MVIEW.REFRESH on the next view in line ONLY after the preceding materialized view's DBMS_JOB completes.
    The purpose of all of this is to do a few things. One: if I simply create a procedure with a DBMS_MVIEW.REFRESH call for each materialized view in order, that works, but if one fails, the job starts over again and will retry up to 16 times - BIG PROBLEM. Secondly, we want the job that calls this procedure to fail if it encounters 2 failures on any one materialized view (because some MVs may be dependent upon that data and cannot use old, stale data).
    This may not be the "best" approach, but I am trying to make the job self-sufficient in that it knows when to fail or not, and doesn't kick off the materialized view jobs all at once (remember, they need to start one after the other, in order).
    As you can see near the bottom, my logic doesn't work quite right. It kicks off all four jobs at once with the date of whatever LAST_REFRESH is in my cursor (which ultimately is from the prior day). What I would like to happen is this:
    1.) 1st MV kicks off as a DBMS_JOB and completes.
    2.) 2nd MV kicks off with a start time 3 seconds after the completion of the 1st MV (based on the LAST_REFRESH date).
    3.) This continues until all MVs are refreshed, or until 2 failures are encountered, in which case no more jobs are scheduled.
    Obviously I am having a little bit of trouble with #2 and #3 - any help is appreciated.
    CREATE OR REPLACE PROCEDURE Next_Job_Refresh_Test2 IS
      V_FAILURES   NUMBER;
      V_JOB_NO     NUMBER;
      V_START_DATE DATE := SYSDATE;
      V_NEXT_DATE  DATE;
      V_NAME       VARCHAR2(30);
      V_DELIMITER  VARCHAR2(1);
      CURSOR MV_LIST IS
        SELECT DISTINCT A.ORDER_OF_REFRESH, A.MV_OBJECT_NAME
          FROM CATEBS.DISCO_MV_REFRESH_ORDER A
         WHERE A.ORDER_OF_REFRESH < 5
         ORDER BY A.ORDER_OF_REFRESH ASC;
      CURSOR MV_ORDER IS
        SELECT B.ORDER_OF_REFRESH, B.MV_OBJECT_NAME, A.LAST_REFRESH
          FROM USER_SNAPSHOTS A, DISCO_MV_REFRESH_ORDER B
         WHERE A.NAME = B.MV_OBJECT_NAME
         ORDER BY B.ORDER_OF_REFRESH ASC;
    BEGIN
      FOR I IN MV_LIST LOOP
        IF I.ORDER_OF_REFRESH = 1 THEN
          V_START_DATE := SYSDATE + (30/86400); -- Start the first job 30 seconds after execution time
        ELSE
          V_START_DATE := V_NEXT_DATE;
        END IF;
        V_FAILURES  := 0;
        V_JOB_NO    := 0;
        V_NAME      := I.MV_OBJECT_NAME;
        V_DELIMITER := '''';
        DBMS_JOB.SUBMIT(V_JOB_NO,
                        'DBMS_MVIEW.REFRESH(' || V_DELIMITER || V_NAME || V_DELIMITER || ');',
                        V_START_DATE, NULL);
        SELECT JOB, FAILURES INTO V_JOB_NO, V_FAILURES
          FROM USER_JOBS
         WHERE WHAT LIKE '%' || V_NAME || '%'
           AND SCHEMA_USER = 'CATEBS';
        IF V_FAILURES = 3 THEN
          DBMS_JOB.BROKEN(V_JOB_NO, TRUE, NULL);
          EXIT;
        END IF;
        FOR O IN MV_ORDER LOOP
          IF I.ORDER_OF_REFRESH > 2 THEN
            V_NEXT_DATE := O.LAST_REFRESH + (3/86400); -- Start next MV 3 seconds after completion of the prior refresh
          END IF;
        END LOOP;
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        IF MV_LIST%ISOPEN THEN
          CLOSE MV_LIST;
        END IF;
        NULL;
    END Next_Job_Refresh_Test2;
    ---------------------------------------------------------------------------------------------------------------------

    Justin,
    I think I am getting closer. I have a procedure, shown just below, that updates my custom table with information from USER_SNAPSHOTS to reflect the time and status of the refresh completion:
    CREATE OR REPLACE PROCEDURE Upd_Disco_Mv_Refresh_Order_Tbl IS
      V_STATUS       VARCHAR2(7);
      V_LAST_REFRESH DATE;
      V_MV_NAME      VARCHAR2(30);
      CURSOR MV_LIST IS
        SELECT DISTINCT NAME, LAST_REFRESH, STATUS
          FROM USER_SNAPSHOTS
         WHERE OWNER = 'CATEBS';
    BEGIN
      FOR I IN MV_LIST LOOP
        V_STATUS       := I.STATUS;
        V_LAST_REFRESH := I.LAST_REFRESH;
        V_MV_NAME      := I.NAME;
        UPDATE DISCO_MV_REFRESH_ORDER A
           SET A.LAST_REFRESH = V_LAST_REFRESH
         WHERE A.MV_OBJECT_NAME = V_MV_NAME;
        COMMIT;
        UPDATE DISCO_MV_REFRESH_ORDER A
           SET A.REFRESH_STATUS = V_STATUS
         WHERE A.MV_OBJECT_NAME = V_MV_NAME;
        COMMIT;
      END LOOP;
    END Upd_Disco_Mv_Refresh_Order_Tbl;
    Next, I have a "new" procedure, shown just below, that does the job creation and refresh. When starting the loop, it sets the LAST_REFRESH date in my table to NULL and the STATUS to 'INVALID'. Then, if the order of refresh = 1, it uses SYSDATE to submit the job and start right away; otherwise it uses V_NEXT_DATE. Now, V_NEXT_DATE is equal to the LAST_REFRESH date from my table once the view has completed and V_PREV_STATUS = 'VALID'. I then tack on 2 seconds to that to begin my next job. See the code below:
    CREATE OR REPLACE PROCEDURE Disco_Mv_Refresh IS
      V_FAILURES    NUMBER;
      V_JOB_NO      NUMBER;
      V_START_DATE  DATE := SYSDATE;
      V_NEXT_DATE   DATE;
      V_NAME        VARCHAR2(30);
      V_PREV_STATUS VARCHAR2(7);
      CURSOR MV_LIST IS
        SELECT DISTINCT A.ORDER_OF_REFRESH, A.MV_OBJECT_NAME,
                        A.LAST_REFRESH, A.REFRESH_STATUS
          FROM CATEBS.DISCO_MV_REFRESH_ORDER A
         WHERE A.ORDER_OF_REFRESH <= 5
         ORDER BY A.ORDER_OF_REFRESH ASC;
    BEGIN
      FOR I IN MV_LIST LOOP
        V_NAME     := I.MV_OBJECT_NAME;
        V_FAILURES := 0;
        UPDATE DISCO_MV_REFRESH_ORDER SET LAST_REFRESH = NULL
         WHERE MV_OBJECT_NAME = V_NAME;
        UPDATE DISCO_MV_REFRESH_ORDER SET REFRESH_STATUS = 'INVALID'
         WHERE MV_OBJECT_NAME = V_NAME;
        IF I.ORDER_OF_REFRESH = 1 THEN
          V_START_DATE := SYSDATE;
        ELSE
          V_START_DATE := V_NEXT_DATE;
        END IF;
        DBMS_JOB.SUBMIT(V_JOB_NO,
                        'DBMS_MVIEW.REFRESH(' || '''' || V_NAME || '''' ||
                        '); BEGIN UPD_DISCO_MV_REFRESH_ORDER_TBL; END;',
                        V_START_DATE, NULL);
        SELECT A.REFRESH_STATUS, A.LAST_REFRESH
          INTO V_PREV_STATUS, V_NEXT_DATE
          FROM DISCO_MV_REFRESH_ORDER A
         WHERE (I.ORDER_OF_REFRESH - 1) = A.ORDER_OF_REFRESH;
        IF I.ORDER_OF_REFRESH > 1 AND V_PREV_STATUS = 'VALID' THEN
          V_NEXT_DATE := V_NEXT_DATE + (2/86400); -- start 2 seconds after the prior refresh
        ELSE
          V_NEXT_DATE := NULL;
        END IF;
      END LOOP;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        IF MV_LIST%ISOPEN THEN
          CLOSE MV_LIST;
        END IF;
        NULL;
    END Disco_Mv_Refresh;
    My problem is that it doesn't appear to be looping to the next job. It worked successfully on the first job but not on the subsequent jobs (or materialized views, in this case). Any ideas?
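
    One thing worth noting here: DBMS_JOB.SUBMIT only takes effect at COMMIT, and the job then runs asynchronously, so the SELECT of REFRESH_STATUS immediately after SUBMIT reads the pre-refresh state of the table - which would explain the second iteration getting a NULL start date. A way to serialize the refreshes without polling at submit time is to let each job submit its successor after its own refresh has finished. The following is a minimal sketch of that idea, assuming the same DISCO_MV_REFRESH_ORDER table; the procedure name Refresh_And_Chain is hypothetical, and the two-failure cap discussed above is left out:
    CREATE OR REPLACE PROCEDURE Refresh_And_Chain(P_ORDER IN NUMBER) IS
      V_NAME   VARCHAR2(30);
      V_JOB_NO NUMBER;
    BEGIN
      -- Look up the MV at this position; NO_DATA_FOUND ends the chain.
      SELECT MV_OBJECT_NAME INTO V_NAME
        FROM DISCO_MV_REFRESH_ORDER
       WHERE ORDER_OF_REFRESH = P_ORDER;
      -- Refresh synchronously. If this raises, the successor below is
      -- never submitted, so the chain stops at the failing view.
      DBMS_MVIEW.REFRESH(V_NAME);
      -- Only after a successful refresh, schedule the next MV in line.
      DBMS_JOB.SUBMIT(V_JOB_NO,
                      'REFRESH_AND_CHAIN(' || TO_CHAR(P_ORDER + 1) || ');',
                      SYSDATE + (3/86400), -- 3 seconds from now
                      NULL);
      COMMIT; -- a DBMS_JOB submission only becomes runnable on commit
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        NULL; -- past the last ORDER_OF_REFRESH: nothing left to chain
    END Refresh_And_Chain;
    The chain would be kicked off once with DBMS_JOB.SUBMIT(v_job, 'REFRESH_AND_CHAIN(1);', SYSDATE, NULL); followed by a COMMIT. Because each link only runs after the previous refresh has returned, no LAST_REFRESH polling is needed.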

  • iMovie Hangs After Full Quality Sharing Is Complete

    Hi,
    Why does this happen? I did a Full Quality Share on a 2-1/2 hour multi-clip movie (using Share Selected Clips Only). The export progress dialog hung after the export was clearly finished. I eventually had to Force Quit iMovie.
    The exported movie looks fine in QT Pro. However, Movie Properties in QT shows that this movie is 24 frames longer than what I exported from iMovie. Doesn't seem to be a big deal, but annoying nonetheless.
    I'm more concerned about iMovie hanging after a long movie export as described.

    2-1/2 hour multi-clip movie (using Share Selected Clips Only)
    So, your project in the timeline is even longer than 150 min? How much did you import into iM to create such a "beast"? Must be a gigantic project, hm?... Titanic II... ;-))
    iM is a consumer product; we do read reports here that, from some point of size (imported data, length, complexity of project), iM gets a little "overwhelmed"... especially since iM is part of iLife, exports are mainly meant to go to tape (60 min max.) or iDVD (120 min max.).
    Secondly, Mac OS X is a UNIX system, which makes extensive use of so-called temp files on the (startup) hard drive... it is recommended not to fill the drive beyond 80-90% of its capacity.
    Probably your ~35 GB of data transfer from a ~60 GB project was simply too much for your Mac?
    Helpful?

  • Record does not "check in" after workflow is complete

    Hi All,
    I have built a very simple workflow as below
    1. start step
    2. process step
    3. approve step
    4. stop step
    In the fourth step, the stop step, I have selected "check in" so that the record automatically checks in after the process is complete,
    but the record does not check in, and the Data Manager shows it as in workflow. Please help.
    Thanks in Advance
    Sharma.

    Hello Abhishek,
    Thanks for the reply.
    The issue is resolved.
    The user had the authorizations and the records were checked out as well.
    I was missing the "Mark as Approved" step; that was the reason the records did not check in automatically.
    Thank you again.
    Regards,
    Sharma

  • "Modified By" Field Value is same "Created By" field value after the workflow completion.Sp 2010 workflows

    Hi All,
    I have a workflow A attached to list List1.
    I added an item (now Created By and Modified By are the same).
    Next, person B modified the item.
    Ideally the Created By and Modified By values should now be different, but they are the same after the workflow completes.
    This is very strange, as I am not modifying the Modified By field anywhere.
    Please let me know if anyone has faced a similar problem.
    Thanks
    Ravi

    The workflow will run as the person who initiated it. As such, if the workflow starts when an item is created, then it'll be running as the user who created the item.
    When the workflow changes anything, the 'Modified By' field will be updated to show the identity the workflow is running under. I think that explains your behaviour?
    To change it you could use an impersonation step, but that would simply replace one name with another. I don't think you've got access to the 'SystemUpdate()' method in workflows, which would allow you to avoid updating the Modified By and Modified date fields.

  • Data Integrator Web Administrator fails to pull batch job status

    We use DB2 v8.1.17.644 with FixPak 17 for the repository database. With FixPak 10 everything worked properly. After the upgrade to FixPak 17 on the database server (running on Linux), with the Data Integrator server installed on Windows 2003 Server, it fails. Does anyone have similar issues or a solution for this problem?
    Edited by: Madhu Allani on Oct 17, 2008 7:49 PM

    The issue with ALVW_HISTORY failing for DB2 is fixed in DS XI 3.1.
    For 11.7 you need to modify the view definition. The problem happens if you are using a System configuration to run the job: the view fails to convert a string to a number.
    Search for the following in your view definition of ALVW_HISTORY:
    L.OBJECT_KEY      = CAST ( SC.VALUE as numeric)
    and replace it with the following; it should work fine after that:
    CAST (L.OBJECT_KEY as char) = SC.VALUE
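
    To illustrate why the original predicate breaks: a System-configuration VALUE can hold a non-numeric string, and casting that to a number fails at execution time, whereas casting the numeric key to a string always succeeds. A minimal sketch against DB2 (the literal 'MyConfig' is a made-up value; SYSIBM.SYSDUMMY1 is DB2's one-row dummy table):
    -- Fails once SC.VALUE holds a configuration name rather than a number:
    SELECT CAST('MyConfig' AS NUMERIC) FROM SYSIBM.SYSDUMMY1;  -- conversion error
    -- The rewritten direction is always safe:
    SELECT CAST(42 AS CHAR(10)) FROM SYSIBM.SYSDUMMY1;         -- '42' (blank-padded)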

  • SM36 job: the next job should start after the completion of the previous job

    Hi All,
    I have a job which runs hourly. My requirement is that the next job should start only after the previous job completes.
    For example:
    It runs every hour - 8 am, 9 am, 10 am, 11 am, etc.
    My requirement:
    The 9 am job should start after completion of the 8 am job. Can you please let me know how I can achieve this through SM36?
    Thanks,
    Tashvi

    Hi Tashvi,
    Start criteria:
    From the initial define background job screen, click on the 'Start date' pushbutton to indicate the start criteria for the background job. The options available for scheduling jobs are:
    Immediate - allows you to start a job immediately.
    Date/Time - allows you to start a job at a specific date/time.
    After Job - allows you to start a job provided that another job has completed.
    After Event - allows you to specify the event when you want your job to run.
    At Operation Mode - allows you to start a job when the system switches to a particular operation mode.
    Scheduling a job after another job:
    The 'After Job' pushbutton allows you to start a job provided that another job has been completed. With this option, you must specify the name of the job that must be completed for the current job to run. If the "Start status-dependent" option is checked, the current job will only run if the specified job ends successfully. This option is useful in cases where the current job depends on the outcome of the specified job.
    Hope this Helps.
    Regards,
    Chandravadan

  • Unable to install Data Integrator Job Server

    When attempting to install Business Objects XI (R2) Data Integrator I am unable to check the 'Job Server' component to be installed. I have tried all the different licenses and options we have available but am unable to get around this.
    Has anyone else experienced this before? Is there another forum where I should be looking for this type of thing?
    Thanks,
    Ben

    I was able to resolve this by using the following steps. Hope this helps anyone out there in the same boat...
    PRE-INSTALLATION INSTRUCTIONS
    Data Integrator products are available for download from the ESD site at http://businessobjects.subscribenet.com. The License Authorization Code above gives you access to product license file generation. While your Data Integrator software order is being processed, you can prepare for installation by using the Authorization Code as follows:
    1)     Determine an installation plan.
    The plan should include which Data Integrator components will be installed on what computers. Only the Job Server licenses will require a hostid in order to generate the license files. Administrator, Designer, and Interfaces for Designer such as SAP R/3 ABAP, JD Edwards, or PeopleSoft do not require hostids in v6.5 and higher.
    2)     Download your license files:
    a)     Go to http://webkey.businessobjects.com on the Internet.
    b)     Copy the Authorization Code and paste it into the License Authorization text box.
    c)     Click the 'SUBMIT' button.
    The License Fulfillment page lists products in your order for which you can generate license file(s). This page allows you to generate license files one at a time, at your convenience. Only the Job Server licenses will require a hostid for versions 6.5 and higher.
    Note: if the computer has more than one network card, include all hostids separated by a space to create a nodelocked license file. To determine the hostid on Windows machines, run autorun.exe from the CD or the Data Integrator directory. When autorun.exe brings up the splash screen select the "GET HOSTID" button.
    To determine the hostid on HP machines, run \unix\hpux1100\lmhostid from the mounted CD or the Data Integrator directory. Or use the lmhostid -long command for HP-Itanium.
    To determine the hostid on SUN machines, run \unix\sunos\lmhostid from the mounted CD or the Data Integrator directory.
    To determine the hostid on AIX machines, run \unix\aix430\lmhostid from the mounted CD or the Data Integrator directory.
    Note: For Windows installations, select the "Ethernet" hostid type.
    For UNIX installations, select "Long" hostid type.
    d)     Save each license file with a *.lic extension in the planned location. For the Data Integrator installation program to complete successfully, you are required to point to the location of the license file(s) associated with the component(s) you are installing.

  • Problem in reading data from serial port continuously - application hangs after some time

    I need to read data from two COM ports, and the order in which data appears from the COM ports is not fixed.
    I have used a small timeout and am reading data in a while loop continuously. If my application runs steadily for some time, it hangs and afterwards doesn't receive any data again.
    Then I need to restart my application to make it work again.
    I am attaching the VI. Let me know of any issue.
    Kudos are always welcome if you got solution to some extent.
    I need my difficulties because they are necessary to enjoy my success.
    --Ranjeet
    Attachments:
    Scanning.vi 39 KB

    billko wrote:
    Ranjeet_Singh wrote:
    I need to read data from two COM ports, and the order in which data appears from the COM ports is not fixed.
    I have used a small timeout and am reading data in a while loop continuously. If my application runs steadily for some time, it hangs and afterwards doesn't receive any data again.
    Then I need to restart my application to make it work again.
    I am attaching the VI. Let me know of any issue.
    What do you mean, "not fixed"? If there is no termination character, no start/stop character(s), or even a consistent data length, then how can you really be sure when the data starts and stops?
    I probably misunderstood you, though. Assuming the last case is not true - there is a certain length to the data - then you should use Bytes at Port, like in the otherwise disastrous serial port read example. In this case, it's NOT disastrous. You have to make sure that you read all the data that came through. Right now you have no idea how much data you just read. Also, if this is streaming data, you might want to break it out into a producer/consumer design pattern.
    Not fixed means the order is not fixed; data from either COM port can come at any time. The length is fixed: one COM port has 14 bytes and the other 8 bytes.
    Reading the data is not an issue for me, as it works fine, but my question is why my application hangs after some time and stops reading data from the COM port.
    Kudos are always welcome if you got solution to some extent.
    I need my difficulties because they are necessary to enjoy my success.
    --Ranjeet

  • Workflow shows as 'In Progress' after the workflow has been completed

    Hello All,
    I have created a custom workflow using SharePoint Designer. Within this workflow I have multiple 'approval process' tasks. In theory this was so that once the first user had approved the item, the next would be prompted to approve the item, and so on. The users that the item must be approved by are set when the item is submitted initially.
    Just so anyone reading this knows: I have no formal experience/education in SharePoint workflow design, but I would like to think I know my way around SharePoint (in general) at this point.
    My problem is, the company I work for is just starting out using SharePoint workflows, and from what I understand, workflows that are 'In Progress' have a bearing on server performance. I noticed today that there are 5 items in the list which, under the 'workflow status' column, display as 'In Progress', which is entirely correct. However, when I go to 'List Settings' -> 'Workflow Settings', this workflow shows 8 workflows 'In Progress'.
    Thank you to anyone who is able to help me with this,
    James

    Hi,
    According to your post, my understanding is that the workflow shows as 'In Progress' after the workflow has been completed.
    To send the tasks one by one, you can select "one at a time (serial)" when you select the Task Process Participants.
    To my knowledge, when you go to 'List Settings' -> 'Workflow Settings', it shows all the workflows you have associated with the list.
    To see a running workflow, you need to select an item, right-click the title, and then select the workflow.
    However, each workflow instance can only start once on an item. In other words, you cannot start the same workflow again until the previous one is completed.
    As you said, the workflow shows as 'In Progress'.
    Please make sure all the users have approved their tasks.
    Best Regards,
    Linda Li
    TechNet Community Support

  • My iPod classic 80GB hangs; after trying the press-and-hold method it is still not working. Please tell me the solution.

    My iPod classic 80GB hangs. After trying the press-and-hold method it is still not working. Please tell me the solution.

    Thanks for your response and good luck wishes, I suspect I will need them!
    In principle, I agree re: the manufacturer's warranty. However, I am pretty upset that this is now my second iPod to develop a critical fault within weeks of the warranty expiring, and frankly, it is not unreasonable to expect a state-of-the-art $500 electronic device to last well beyond one year of life.
    I agree talking to Apple is not likely to do me any good (the clue is in how impossible they make it to talk to them in the first place) - but that is not necessarily OK. I expect I will have to pay money to get the battery replaced - again, not OK (full stop - but especially given the cost of the device and the money I have spent with Apple). Yes, the batteries have a limited lifespan, but it should last longer than this (and surely, I should notice a gradual decline in its functionality, not an instant stop).
    I will try Deggie's suggestion (see my reply post), but probably won't hold my breath (think I have already done this). I probably will have to get the new battery - and probably under my own steam. It is a principle at stake and I feel I should be able to let Apple know how I'm feeling - and am frustrated that they make this virtually impossible. It sends the very clear message that they are not interested in listening to their customers.

  • Data Integrator Web Service Job State

    I would like to know if there's any way, through DI's web services, to tell whether a job is running on the Job Server or not. I am building a .NET application that will allow a user to manually launch a job, but I need to prevent users from launching the job if it's already running.
    I am using Data Integrator 7.2
    Thanks

    You can check the following posts, which have similar discussions:
    Re: Extracting JOB_ID for WebClient via webservice
    Extracting JOB_ID within the DS Job
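
    If polling the repository directly is an option alongside the web services, one approach is to check the repository's execution-history view before launching. A rough sketch, with the caveat that the column names used here (SERVICE for the job name, END_TIME empty while a run is in flight) are assumptions and should be verified against the ALVW_HISTORY definition in your repository version:
    -- Hypothetical check: count unfinished runs of the job before launching.
    -- Verify column names against your repository's ALVW_HISTORY view.
    SELECT COUNT(*)
      FROM ALVW_HISTORY
     WHERE SERVICE = 'MY_JOB_NAME'   -- hypothetical job name
       AND END_TIME IS NULL;         -- assumed: populated only when the run ends
    A count greater than zero would mean a run is still active, and the .NET application could refuse to launch another.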

  • Job Publication doesn't get changed data in Job Posting through workflow

    Job Publication is not picking up the changed data in the Job Posting/Requisition through workflow.
    When I change the data in the Job Posting and release it manually, the changed data gets reflected in the Job Publication, but if I release the Job Posting through the workflow (automatically), then the Publication doesn't pick up the data. The workflows are working fine in the system, yet the problem exists.
    Thanks in advance for the reply.

    1. Log in to the portal with user ID and password.
    2. Create a requisition initially.
    3. Create and release the Job Posting (manually).
    4. Create and release the Job Publication (manually).
    5. Try editing the previous Job Posting and save it, but don't release the Job Posting manually. Now come to the personal page; when we again enter the same Job Posting, its status is set to "released" automatically by a workflow.
    6. Now if we proceed to the Job Publication and try displaying it, the edited changes in the Job Posting are not displayed.
    But if we have "released" the Job Posting manually, then the changes are reflected in the Publication.
    The user wants to use the workflow scenario and also wants the edited changes to be taken up by the Publication.
    Hope this description helps!
    Thanks in advance.
