Post Processing records in BAPI_PRODORDCONF_CREATE_TT

Hi All,
I am using BAPI_PRODORDCONF_CREATE_TT for production confirmation with the automatic goods receipt (GR) feature.
However, if an error occurs during the goods movement, the system creates a post-processing record (visible in transaction COGI).
We don't want entries in COGI; if an error occurs, the system should terminate the session. The corresponding control is maintained in SPRO, but the BAPI overrules it.
Kindly let me know which settings to use in the BAPI to avoid COGI entries.
In the Post Wrong Entries field of the BAPI I have passed ' ' (blank).
Please suggest.

Hi Stuti,
What I suggest is to use two BAPIs instead of one.
First use BAPI_GOODSMVT_CREATE to carry out the goods movements.
Only if this BAPI is successful, execute the BAPI you are already using, but purely for the confirmation.
This is the best way to keep confirmations under control when goods movements fail; a rough sketch of the sequence is shown below.
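A minimal sketch of the two-step sequence follows. Parameter names and structures are quoted from memory and all values are made up, so please verify the interfaces in SE37 before using it:

* Step 1: post the goods movement on its own
DATA: ls_gm_header  TYPE bapi2017_gm_head_01,
      ls_gm_code    TYPE bapi2017_gm_code,
      lt_gm_items   TYPE STANDARD TABLE OF bapi2017_gm_item_create,
      lt_return     TYPE STANDARD TABLE OF bapiret2,
      lt_timeticket TYPE STANDARD TABLE OF bapi_pp_timeticket,
      lt_det_return TYPE STANDARD TABLE OF bapi_coru_return.

ls_gm_code-gm_code = '02'.   "02 = goods receipt for production order
"fill ls_gm_header, lt_gm_items and lt_timeticket as in your current program

CALL FUNCTION 'BAPI_GOODSMVT_CREATE'
  EXPORTING
    goodsmvt_header = ls_gm_header
    goodsmvt_code   = ls_gm_code
  TABLES
    goodsmvt_item   = lt_gm_items
    return          = lt_return.

READ TABLE lt_return TRANSPORTING NO FIELDS WITH KEY type = 'E'.
IF sy-subrc = 0.
  "Goods movement failed: roll back and stop, so no confirmation
  "is posted and nothing can end up in COGI.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK'.
  RETURN.
ENDIF.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.

* Step 2: only now post the confirmation, without goods movements
CALL FUNCTION 'BAPI_PRODORDCONF_CREATE_TT'
  TABLES
    timetickets   = lt_timeticket
    detail_return = lt_det_return.
CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
  EXPORTING
    wait = 'X'.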
Regards,
Yogesh

Similar Messages

  • Post processing records deletion log in MF47 -reg

    Hi...
    How can we find out which users deleted post-processing records in MF47?
    Some of the post-processing records in MF47 are being deleted by users without being processed in time.
    We would like to know where this log is kept, so that we can see
    which user deleted which records on which date.
    regards,
    madhu kiran

    Hi,
    I posted earlier about the deletion of MF70 records (backdated backlogs that could not be processed).
    Now I am asking how to track the deletion of post-processing records in MF47.
    If a record is deleted, is there really no way to track when and by whom it was deleted?
    regards,
    madhu kiran

  • Post processing records in MF47 and COGI

    Hi All,
    I have a query:
    In standard SAP, will the post-processing records created by repetitive manufacturing be visible in transaction COGI?
    And will the post-processing records created by discrete manufacturing be visible in transaction MF47?
    Regards,
    Vinayak.

    Hi ,
    In general, for discrete manufacturing the post-processing records are checked and cleared in transaction COGI,
    whereas for repetitive manufacturing (REM) it is MF47.
    You will be able to view the REM post-processing records in MF47; this is standard SAP behaviour, so I can say there is no bug in your system.
    Hope this will help you.
    Regards
    radhak mk

  • TECO status for Prod order for which Post processing record exists

    Dear PP Gurus,
    We use BAPI_PRODORD_COMPLETE_TECH to TECO production orders. The program doesn't TECO orders if post-processing records exist for the order, giving the message "Postprocessing records for order 1000068 prevent technical closing". I think this is a standard BAPI message.
    When the same BAPI is run in foreground mode, it shows a confirmation prompt, "Order 1000068: There are still reprocessing records. Set order to Technically Complete?", with Yes/No/Cancel options. You can save the TECO by selecting Yes.
    Is there a way to achieve this in background mode?
    Thank you much in advance for help,
    Regards,
    Jatin

    Hello Jatin,
    Call function DIALOG_SET_NO_DIALOG before the BAPI; the system then handles the BAPI as in non-dialog mode and the pop-up does not appear.
    Refer to KBA 1986661 - PP-SFC: BAPI_PRODORD_COMPLETE_TECH Popup.
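    A rough sketch of the call sequence (the order number is hypothetical and the parameter names are from memory, so please check the interfaces in SE37):
    DATA: lt_orders TYPE STANDARD TABLE OF bapi_order_key,
          ls_order  TYPE bapi_order_key,
          lt_return TYPE STANDARD TABLE OF bapi_order_return.
    "Suppress the confirmation prompt so the BAPI behaves as in background mode
    CALL FUNCTION 'DIALOG_SET_NO_DIALOG'.
    ls_order-order_number = '000001000068'.   "hypothetical order number
    APPEND ls_order TO lt_orders.
    CALL FUNCTION 'BAPI_PRODORD_COMPLETE_TECH'
      TABLES
        orders        = lt_orders
        detail_return = lt_return.
    CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
      EXPORTING
        wait = 'X'.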
    Best Regards,
    R.Brahmankar

  • Post Processing List

    Hello all,
    I have a question related to the post-processing list (transaction MF47), where the backlogs resulting from backflush appear.
    If, by mistake, I deleted some post-processing records (instead of processing them), how could I see these deleted records later? And how could I process them later on, if they still need to be processed? In other words, is there an "undo" for the deletion of post-processing records?
    Thank you very much,
    Edith

    Dear Catalin Ignat,
    Once a post-processing record is deleted in MF47,
    there is no way to fetch the data back.
    It will not show in AFFW, AUFM or COGI.
    SAP has only provided the "Reprocessing allowed" setting in the REM
    profile as a means of controlling this.
    So once deleted, even unknowingly, it is lost.
    Reward points and close the thread.
    Regards
    Mangal

  • Post processing errors in REM

    Hi Gurus,
    What is the significance of post-processing in REM?
    We have 4 options.
    1) MF45 – Post process individual
    2) MF46 – Post process collective
    3) MF47 – Post processing list
    4) COGI – Post processing individual components
    I am able to use COGI and clear errors that occurred during backflushing,
    but I cannot do so with the first three transactions above.
    The system takes me to all materials where correction is needed, for example in MF47. When I select a material and change the post-processing record, it takes me to the post-processing list of the component.
    Here I observe that the bell icon, batch determination and stock determination are disabled, whereas these are enabled in COGI. What can be the reason behind this? Are any settings missing in REM?
    Pl. help me.
    Srini

    Hi Srini,
    To avoid this you can check the following things.
    1. Ensure that sufficient stock is available in the backflushing locations of each material.
    2. To prevent the generation of the post-processing list, you can block the BOM from being backflushed if sufficient stock is not available in the specified location for the BOM components. To do this, change the "REM profile" to "002" in the MRP 4 view for the BOM material (if the profile is
    not available you can create it through SPRO).
    3. To clear the existing backlog, use transaction MF47 and reprocess (ensure that the required stock is
    available for each BOM component).
    Hope this solves your problem.
    Regards,
    R.Brahmankar

  • Post Processing Backflushing Backlogs

    Hi All,
    When I want to reprocess backflushing backlogs with MF45, I cannot make any changes (such as changing the storage location or batch number) because the post-processing list that appears is read-only. How can I change this screen from display mode to change mode?
    Regards,
    Fateme Goudarzi

    Dear Fateme Goudarzi,
    1) Just check your REM profile in OSP2.
    2) Under "Error creation for backflushing" there are two options:
    Option A) Create cumulative post-processing records
    Option B) Also create individual post-processing records
    3) If option A is ticked, go to MF47; there you can change the fields or clear the records.
    4) If option B is ticked, go to COGI; there you can change the fields or clear the records.
    Regards
    Madhu Kumar

  • CIF Post Processing

    Dear All,
    In CIF post-processing, whether via the CIF cockpit or via other post-processing transactions in APO or ECC, we need to go into each post-processing record to find the reason it was created and to check its status (error/warning).
    Is there a way to view all the post-processing records at the APO or ECC system level?
    Is there any method to analyse the reason for the post-processing records other than going into each one of them?
    I saw on the web that a program exists where, by adding some simple coding, a report can be created to view the reason the post-processing record was created.
    Regards,
    Sridhar R

    Hi Senthil,
    Thanks for the reply.
    The alerts system is already in place.
    We are looking for a report which, on execution, would give the reason for the post-processing record / error creation.
    Based on the report, corrective action could be taken by appropriately notifying the respective teams/users in the corresponding plants/regions.
    Regards,
    Sridhar R

  • Post Process Event Handler ----Unique Constraint Violation--Create User

    Hi everyone,
    I am creating a user using the create user request template, and there is one level of approval for this.
    I have one pre-process event handler which populates field A, and one post-process event handler which updates three fields in the user form.
    In the request template itself we placed the value "ABC" for field B, and this field B is to be overridden in the post-process event handler with the value "XYZ".
    Now when I raise the request, the user is created in OIM, but the value in field B is not replaced with XYZ.
    Below are the errors which i got in the logs while executing post process event handler :
    <Mar 28, 2012 10:25:58 AM CDT> <Warning> <oracle.iam.callbacks.common> <IAM-2030146> <[CALLBACKMSG] Are applicable policies present for this async eventhandler ? : false>
    <Mar 28, 2012 10:25:59 AM CDT> <Warning> <org.eclipse.persistence.session.oim> <BEA-000000> <
    Exception [EclipseLink-4002] (Eclipse Persistence Services - 2.1.3.v20110304-r9073): org.eclipse.persistence.exceptions.DatabaseException
    Internal Exception: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_OIM.UK_UAR_ATTR_NAME_VALUE) violated
    Error Code: 1
    Call: INSERT INTO USR_ATTRIBUTE_RESERVATIONS (UAR_RESERVATION_KEY, UAR_ATTRIBUTE_NAME, CREATED_BY, CREATED_ON, DATA_LEVEL, UAR_REQUEST_ID, UAR_RESERVED_VALUE, UPDATED_BY, UPDATED_ON) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
    bind => [10, User Login, null, null, null, 10, DUMMY14, null, null]
    Query: InsertObjectQuery([email protected]658269)
    Exception at usrIntf.updateUser IAM-3050128 : Cannot reserve user attribute User Login with value DUMMY14 in OIMDB. Corresponding request ID is 10.:User Login:DUMMY14:10
    I checked the reservations table and there are no records in that table.
    Has anyone faced this issue? If so, how can it be resolved?

    Are you trying to update the User ID? As far as I know, during create user requests OIM reserves the user login as it goes through approval, and I don't think you can update that directly. I haven't tried it, but can you tell me which fields you are prepopulating and which you are updating? Are any of them OOTB fields, or are they UDFs?
    -Bikash

  • Post Process Event Handler not getting user's CURRENT_STATE for a UDF field

    I have a post-process event handler in OIM R2 BP04 which runs on trusted reconciliation; it compares the user's current state ("CURRENT_USER") with the new state ("NEW_USER_STATE") and derives some business logic from that.
    The problem I am facing is that it is not able to get the user's "CURRENT_USER" state for a UDF (EMAIL_LIST) field; the value comes back as null and hence breaks the business logic. The same event handler works in the TEST and QA environments (4-node clusters) but does not work in the PROD environment (4-node cluster).
    The one thing done differently was that the event handler was not present during the initial reconciliation; after the initial load of the users I manually executed a database SQL query which updated the EMAIL_LIST field for all users.
    I think that, since EMAIL_LIST was not populated during the initial recon and was only populated through the SQL update, the orchestration inter-event data does not contain the email list, and so it comes back as null.
    But I am seeing the same behaviour for new records as well, which were created and then updated after the event handler was registered.
    Please reply, if you have encountered something similar.
    Thnx
    Akshat

    Yes, I need the old state, which is:
                        // the inter-event data holds the user's previous state
                        Identity[] oldUserStatesIdntArr =
                            (Identity[]) interEventData.get("CURRENT_USER");

  • Issue with Bulk Load Post Process

    Hi,
    I ran the bulk load command line utility to create users in OIM. I had 5 records in my CSV file, of which 2 users were created successfully in OIM; for the rest I got exceptions because the users already existed. After that, when I run the bulk load post-process for LDAP sync, password generation, and notification, it does not work even for the successfully created users. Ideally it should sync the successfully created users. However, if there are no exceptions during the bulk load command line utility, the LDAP sync works fine through the bulk load post-process. Any idea how to resolve this issue and sync into OID the users that were created successfully? Urgent help would be appreciated.

    The scheduled task carries out post-processing activities on the users imported through the bulk load utility.

  • ECC 5.0 database instance install error in  database load (post processing)

    Hi all!
    The installation environment:
    VM SERVER v1.03
    ECC 5.0 / WINDOWS SERVER 2003
    SQL SERVER 2000 SP3
    The DB instance installation hits an error in the database load (post-processing) step:
    sapinst.log:
    INFO 2008-10-16 14:20:54
    Creating file C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB\keydb.5.xml.
    ERROR 2008-10-16 14:22:29
    MSC-01015 Process finished with error(s), check log file C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPSSEXC.log
    ERROR 2008-10-16 14:24:59
    MSC-01015 Process finished with error(s), check log file C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPAPPL2.log
    ERROR 2008-10-16 14:24:59
    MSC-01015 Process finished with error(s), check log file C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPSDIC.log
    ERROR 2008-10-16 14:26:30
    MSC-01015 Process finished with error(s), check log file C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPPOOL.log
    SAPSSEXC.log:
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: START OF LOG: 20081016142100
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#27 $ SAP
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Aug 18 2008 23:28:15
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe -dbcodepage 1100 -i C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPSSEXC.cmd -l C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPSSEXC.log -loadprocedure fast
    (DB) INFO: connected to DB
    (DB) INFO: D010INC deleted/truncated
    Interface access functions from dynamic library dbmssslib.dll loaded.
    (IMP) INFO: EndFastLoad failed with <2: Bulk-copy commit unsuccessful:3624 Location: rowset.cpp:5001
    Expression: pBind->FCheckForNull ()
    SPID: 54
    Process ID: 1120
    3624 >
    (IMP) ERROR: EndFastload: rc = 2
    (DB) INFO: D010TAB deleted/truncated
    SAPAPPL2.log
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: START OF LOG: 20081016142130
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#27 $ SAP
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Aug 18 2008 23:28:15
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe -dbcodepage 1100 -i C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPAPPL2.cmd -l C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPAPPL2.log -loadprocedure fast
    (DB) INFO: connected to DB
    (DB) INFO: AGR_1250 deleted/truncated
    Interface access functions from dynamic library dbmssslib.dll loaded.
    (IMP) INFO: import of AGR_1250 completed (139232 rows) #20081016142200
    (DB) INFO: AGR_1251 deleted/truncated
    (IMP) INFO: EndFastLoad failed with <2: Bulk-copy commit unsuccessful:2627 Violation of PRIMARY KEY constraint 'AGR_1251~0'. Cannot insert duplicate key in object 'AGR_1251'.
    3621 The statement has been terminated.>
    (IMP) ERROR: EndFastload: rc = 2
    (DB) INFO: AQLTS deleted/truncated
    (IMP) INFO: import of AQLTS completed (17526 rows) #20081016142456
    (DB) INFO: disconnected from DB
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: END OF LOG: 20081016142456
    SAPSDIC.log:
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: START OF LOG: 20081016142230
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#27 $ SAP
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Aug 18 2008 23:28:15
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe -dbcodepage 1100 -i C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPSDIC.cmd -l C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPSDIC.log -loadprocedure fast
    (DB) INFO: connected to DB
    (DB) INFO: DD08L deleted/truncated
    Interface access functions from dynamic library dbmssslib.dll loaded.
    (IMP) INFO: ExeFastLoad failed with <2: BCP Commit failed:3624 Location: p:\sql\ntdbms\storeng\drs\include\record.inl:1447
    Expression: m_SizeRec > 0 && m_SizeRec <= MAXDATAROW
    SPID: 54
    Process ID: 1120
    3624 >
    (IMP) ERROR: ExeFastload: rc = 2
    (DB) INFO: disconnected from DB
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: END OF LOG: 20081016142437
    SAPPOOL.log:
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: START OF LOG: 20081016142530
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: sccsid @(#) $Id: //bas/640_REL/src/R3ld/R3load/R3ldmain.c#27 $ SAP
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: version R6.40/V1.4
    Compiled Aug 18 2008 23:28:15
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe -dbcodepage 1100 -i C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPPOOL.cmd -l C:\Program Files\sapinst_instdir\ECC_50_ABAP_NUC\DB/SAPPOOL.log -loadprocedure fast
    (DB) INFO: connected to DB
    (DB) INFO: ATAB deleted/truncated
    Interface access functions from dynamic library dbmssslib.dll loaded.
    (IMP) INFO: ExeFastLoad failed with <2: BCP Commit failed:3624 Location: p:\sql\ntdbms\storeng\drs\include\record.inl:1447
    Expression: m_SizeRec > 0 && m_SizeRec <= MAXDATAROW
    SPID: 54
    Process ID: 1120
    3624 >
    (IMP) ERROR: ExeFastload: rc = 2
    (DB) INFO: disconnected from DB
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: job finished with 1 error(s)
    D:\usr\sap\IDS\SYS\exe\run/R3load.exe: END OF LOG: 20081016142621
    please help!   thanks!

    VM SERVER v1.03
    can you please tell me what you mean with that? VMWare? Is that ESX-Server?
    (DB) INFO: connected to DB
    (DB) INFO: ATAB deleted/truncated
    Interface access functions from dynamic library dbmssslib.dll loaded.
    (IMP) INFO: ExeFastLoad failed with <2: BCP Commit failed:3624 Location: p:\sql\ntdbms\storeng\drs\include\record.inl:1447
    Expression: m_SizeRec > 0 && m_SizeRec <= MAXDATAROW
    SPID: 54
    Process ID: 1120
    3624 >
    (IMP) ERROR: ExeFastload: rc = 2
    I've seen those load problems under VMWare and on boxes hosted as Xen VMs.
    Markus

  • BPS Post Processing Question

    Dear All,
    We have an EXIT function which resides in one planning level and the planning layout in another planning level.
    The EXIT function is calculating the numbers correctly; it creates the internal table XTH_DATA and passes it to function module UPF_PARAM_EXECUTE. When we check XTH_DATA, the data records are exactly as we need them to be generated.
    The issue we are having is that, when the EXIT function has completed, the data in XTH_DATA looks perfect, but the data saved to the InfoCube is not exactly the same as what was created. For example, the EXIT creates data for 2007/001 - 012, yet the resulting data in the InfoCube contains many entries for fiscal periods 2006001 through 2006006. The posting period is left blank by the EXIT function, but the resulting data has posting periods of blank, 1 through 6, etc.
    We use the request ID to display data and to make sure the data source is correct.
    Planning level for EXIT function data model:
    Characteristics are:
    Assortment Type
    Base Unit
    Climate Zone
    Concept
    Department
    Distribution Channel
    Fiscal year
    Fiscal year/period
    Fiscal Year Variant
    Hierarchy ID
    Lifestyle
    Local currency
    Plng Area
    Posting period
    Sales Organization
    Version
    Volume Group
    Key Figures are:
    Balanced Receipts
    Number of Stores
    Stock Index
    Any helps, comments, and/or hints are greatly appreciated.
    Best regards,
    Sam

    Highly frustrating. I've got the IPlanetDirectoryPro SSO token being set, but the custom code to add additional domain cookies...those are not re-written by the cdservlet, and even if hard-coded for testing (I assume I can parse the domain/URI being requested by parsing the HttpServletRequest, but am just testing now) to the 'new domain,' they are sent but discarded by the browser.
    This is bad, as those custom cookies are required by several apps. Is there any documentation on writing a custom cdcservlet, sample code, code for the existing one, or any other means to do this?
    To be clear - 'basic' CDSSO seems to be working, if a request is made to a resource on Domain B, it directs to the AM host, which is in Domain A. The IPlanetDirectoryPro cookie is being set for Domains A and B in this case, and accepted by the browser. That setting I finally found in AMAgent.properties, here:
    com.sun.am.policy.agents.config.cookie.domain.list (in case this helps someone else in the future)
    However, I have post-process code implementing AMPostAuthProcessInterface which was setting custom cookies required by some apps, and I am unable to change the domain these are set in. More accurately, I can change the domain, and the data is sent, but the browser then rejects it, presumably because it sees it as a cross-domain cookie and discards it.
    This seems to leave me only with trying to use a custom cdcservlet, assuming I can find the existing code or something similar to start with, as I have no idea how it avoids the cross-domain cookie issue.
    Anyone?

  • Minimizing loss on post-processing of jpegs

    Today somehow the setting on my camera had raw file saving turned off. To try to minimize loss from post processing of the jpegs I assume I should save the edited files as PSD and develop my final pictures from that? Are there any other helpful tips someone can give for my predicament?

    KarlKrueger wrote:
    Today somehow the setting on my camera had raw file saving turned off. To try to minimize loss from post processing of the jpegs I assume I should save the edited files as PSD and develop my final pictures from that? Are there any other helpful tips someone can give for my predicament?
    Yes, save the edited files as PSD for further editing.
    Another good option is to open the JPEGs in Camera Raw, depending on your OS and Elements version.
    Do all the editing you can in Camera Raw. All those edits are calculated in 16-bit mode in the internal wide-gamut ProPhoto color space. No pixels are changed; only the slider settings are recorded in the metadata: it's a non-destructive workflow that preserves your original, and you don't need to duplicate your file in another large format if you simply click 'Done'. If you want to do further editing in the Editor, then save your Editor edits in a PSD or TIFF format version set. You can even open the file in the Editor in 16 bits for further global adjustments before converting to 8 bits for layers or local tools (I don't: the situations where 16 bits is useful are dealt with in the ACR conversion).

  • CIF Post processing Log issue

    Hi Experts,
    We have implemented the standard CIF post-processing functionality; we are on SCM 5.1.
    I am trying to check the log in transaction /SAPAPO/CPP1. The issues are:
    1) When I select the log for one record (before processing), it says "No log was found for these selection criteria".
    I have also observed that
    2) After reprocessing a record, it appears in the error queue, but it just says "1 post-processing record" posted, the processing indicator is red, and nowhere is the exact error shown.
    Can you please suggest what the problem could be, since I am not able to see the exact log with the error?
    Thanks in advance.
    Samir

    Hi,
    Please check in /SAPAPO/C2 whether you have assigned both logical systems.
    Here you have to enter:
    1) The R/3 logical system: R/3 flag 'X', R/3 release (e.g. 50 for ECC 5.0), inbound or outbound queue depending on the data volume, error handling '1 Postprocessing for Errors', no splitting of LUWs, and role not specified.
    2) The APO logical system: R/3 flag blank, APO release (e.g. 41 for SCM 4.1), inbound or outbound queue depending on the data volume, error handling '1 Postprocessing for Errors', no splitting of LUWs, and role not specified.
    For choosing between inbound and outbound queues, please read the following.
    When you enter a logical target system in transaction /SAPAPO/C2, the system suggests the queue type Outbound Queues as standard. This setting means that queue processing is controlled by the sending system (SAP APO). Problems can occur if the receiving system (SAP R/3) does not have enough free work processes for processing the queue.
    If you are using large amounts of integration data and expect that you may have insufficient free work processes, it may be useful to change to Inbound Queues. This setting means that queue processing is controlled by the receiving system (SAP R/3). If there are insufficient work processes available in the receiving system, the data is temporarily stored in SAP R/3 inbound and then processed gradually.
    But we are using Inbound.
    Hope this will help you.
    Regards,
    Kishore Reddy
