Avoid follow-up

Hi, experts
I have created a follow-up task for a sales document.
I'm trying to prevent the follow-up document from being created when the sales document has status CANCELLED. The status was customized under Status Management --> Define Status Profile for User Status: the status profile contains status CAN (CANCELLED), and the user status triggers business transaction CANCEL with influence "allowed". This status profile was assigned to the sales document. However, even when the sales document carries status CANCELLED, the system still allows creating a follow-up document, which is not what I want.
I hope someone can help me.
Regards.
Thanks in advance.

Hi,
First check where the follow-up button is defined; it may be in GET_BUTTONS or DO_PREPARE_OUTPUT. There, read the current status value and, based on it, enable or disable the FOLLOWUP button, for example:
DATA lv_enabled TYPE boolean.
* First get the value of the current status into lv_status from the context
IF lv_status <> 'CAN'.
  lv_enabled = 'X'.
ENDIF.
* Follow Up
    ls_button-text     = cl_wd_utilities=>get_otr_text_by_alias( 'CRM_UIU_BT/CREATE_FOLLOW_UP' ). "'Follow Up'.
    ls_button-on_click = 'FOLLOWUP'.                        "#EC NOTEXT
    ls_button-enabled  = lv_enabled.
    ls_button-page_id  = me->component_id.
    APPEND ls_button TO rt_buttons.
I hope this resolves your issue.
Regards
Ajay

Similar Messages

  • ACS 5.3.0.40 On-demand Full Backup failed.

    Hi,
    I have ACS 5.3.0.40 Primary and Secondary Authenticators, on which the scheduled backup has stopped.
    When I checked Monitoring Configuration > System Operations > Data Management > Removal and Backup > Incremental Backup, it had changed to OFF mode without any reason.
    The same was observed earlier too.
    I set Incremental Backup to ON and initiated View Full Database Backup Now, but it wasn't successful and reported an error:
    FullBackupOnDemand-Job Incremental Backup Utility System Fri Dec 28 11:56:57 IST 2012 Incremental Backup Failed: CARS_APP_BACKUP_FAILED : -404 : Application backup error Failed
    Later I did an acs stop/start of "view-jobmanager" and initiated the On-demand Full Backup, but no luck; the same error was reported this time too.
    Has anyone faced a similar type of error/problem? Please let me know the solution.
    Thanks & Regards.

    One other thing: if this does end up being an issue with disk space, it is worth considering patch 5.3.0.40.6 or later, since it improves the database management processes.
    This cumulative patch includes fixes/enhancements related to disk management to avoid the following issues:
    CSCtz24314: ACS 5.x *still* runs out of disk space
    and also a fix for
    CSCua51804: View backup fails even when there is space on disk
    The following is taken from the readme for this patch:
    The Monitoring and Reporting database can grow as records are collected. There are two mechanisms to reduce its size and prevent it from exceeding the maximum limit.
    1. Purge: the data is purged based on the configured data retention period or upon reaching the upper limit of the database.
    2. Compress: this mechanism frees up unused space in the database without deleting any records.
    Previously the compress option could only be run manually. In ACS 5.3 Patch 6 it runs daily at a predefined time, automatically, when specific criteria are met. Similarly, by default the purge job runs every day at 4 AM; Patch 6 adds an option for on-demand purge as well.
    The new solution is to perform the Monitoring and Reporting database compress automatically.
    2. A new GUI option enables the Monitoring and Reporting database compress to run every day at 5 AM. This can be configured under Monitoring and Configuration -> System Operations -> Data Management -> Removal and Backup.
    3. Changed the upper and lower limits for purging Monitoring and Reporting data, to make sure that at the lower limit ACS still has enough space to take the backup. The maximum size allocated for the Monitoring and Reporting database is 42% of /opt (139 GB). The lower limit, at which ACS purges the data after taking a backup, is 60% of the maximum size (83.42 GB). The upper limit, at which ACS purges the data without taking a backup, is 80% of the maximum size (111.22 GB).
    4. Up to 5.3 Patch 5 the acsview-database compress operation stopped all services; now only Monitoring and Reporting related services are stopped during this operation.
    5. Provided an "On demand purge" option in the Monitoring and Reporting GUI. This option does not take any backup; it purges the data based on the configured window size.
    6. Even if the "Enable ACS View Database compress" option is not enabled in the GUI, an automatic view database compress is still triggered if the physical size of the Monitoring and Reporting database reaches its upper limit.
    7. The automatic database compress takes place only when the "LogRecovery" feature is enabled; this ensures that the logging which happens during the operation is recovered once it completes. ACS generates an alert when an automatic database compress is needed and also when this feature should be enabled.
    8. Before enabling the "LogRecovery" feature, configure the Logging Categories so that only mandatory data is logged to the Local Log Target and to the Remote Log Target as Log Collector, under System Administration > ... > Configuration > Log Configuration. The "LogRecovery" feature can recover logs only if they are present under the local log target.
    9. The automatic database compress operation is also performed only when the difference between the actual and physical size of the Monitoring and Reporting database is greater than 50 GB.
    10. A new CLI command "acsview" with option "show-dbsize" shows the actual and physical size of the Monitoring and Reporting database; it is available in "acs-config" mode.
               acsview     show-dbsize     Show the actual and physical size of the View DB and transaction log file

  • Automatic Creation of multiple Transportation Lanes

    Hi all.
    Is it possible to avoid the following standard behaviour:
    Change of Supplying Plant in the Special Procurement Type
    If you have changed the supplying plant in the special procurement type in the ERP system, the SCM system creates a new transportation lane for the new combination of supplying plant (from special procurement type) and material/plant. The already existing transportation lane and the now invalid combination of supplying plant and plant in the ERP system is locked for orders in the SAP APO system. This can be seen from the lock indicator X that is set by the system.
    Automatic Creation of Transportation Lanes - Transportation Lane - SAP Library
    Our need for some SKUs is to create two transportation lanes automatically. Using user exit EXIT_SAPLCMAT_001, we added a new line to CT_CIF_MATLOC (and the corresponding CT_CIF_MATLOCX) with a different SUPPLPLANT, but the system then, as described above, blocks (SPRKZ) the SKU on one of the two T-Lanes.
    Thanks in advance for your support.

    Hi Mauro,
    In my opinion you have to create a program to delete the lock indicator: first extract the information from the transportation lanes, then change it (delete the X indicator) and save. This job should be scheduled before the heuristic run.
    To extract data from transportation lanes you can use BAPI_TRLSRVAPS_GETLIST2; to change and save, you can use BAPI_TRLSRVAPS_SAVEMULTI2.
    Hope that can help you!
    Thanks.
    Regards, Marius

  • Logical system name in SLD for the R3 business system is not open for input?

    Dear all,
    We are upgrading to PI 7.1 and have a problem in a scenario where PI sends an IDoc to R3.
    The problem is probably related to the fact that it is not possible to enter a logical system name in the SLD for the R3 system. I cannot read the logical system name for the R3 system into the Integration Directory (adapter-specific identifier), because the logical system field for the R3 business system is not open for input. How can I enter the logical system name so that I avoid the following error:
      <?xml version="1.0" encoding="UTF-8" standalone="yes" ?>
    - <!--  Call Adapter
      -->
    - <SAP:Error xmlns:SAP="http://sap.com/xi/XI/Message/30" xmlns:SOAP="http://schemas.xmlsoap.org/soap/envelope/" SOAP:mustUnderstand="">
      <SAP:Category>XIAdapter</SAP:Category>
      <SAP:Code area="IDOC_ADAPTER">ATTRIBUTE_INV_RCV_SERV</SAP:Code>
      <SAP:P1 />
      <SAP:P2 />
      <SAP:P3 />
      <SAP:P4 />
      <SAP:AdditionalText />
      <SAP:Stack>Receiver service cannot be converted into an ALE logical system</SAP:Stack>
      <SAP:Retry>M</SAP:Retry>
      </SAP:Error>
    Regards Ugur

    There are two ways.
    Option 1: Go to the business system of your R3 system in the Integration Directory, switch to edit mode, select Properties -> Adapter-Specific Attributes, enter the logical system name there and save.
    Option 2: Set it from the mapping: either hard-code it or take it from the source in the EDI_DC40 section of your mapping, and select the corresponding option in your IDoc receiver configuration.
    Option 1 is the best solution: all IDocs to that R3 system will go to the specified logical system.
    regards,
    Arvind R

  • Implement row-level security using Oracle's Virtual Private Database (VPD)

    Environment: Business Objects XI R2; Oracle 10g
    Functional Requirement:
    Implement row-level security using Oracle's Virtual Private Database (VPD) technology. The restriction is that the Business Objects Universe connection should use a generic/"application" database user account. This allows the organization to avoid the situation where the Business Objects password and the Oracle password need to be kept in sync.
    What do we need from the Business Objects support team?
    1. Review the 2 attempted solutions that we have tried to implement
    2. Propose solutions/answers to the open questions for each of the attempted solutions
    3. Propose any alternate solution that will help us implement the Functional Requirement stated above
    Attempted Solution 1: Connection String uses Oracle Proxy User
    The connection string that is specified in the Universe is the following:
    app_user[end_user]/app_user_pwd@Database.WORLD
    app_user = generic application user
    end_user = the Oracle account of the end user, which is set using @Variable('BOUSER')
    app_user_pwd = password of the generic application user
    We have tried and implemented this in our test environment. However, we have some questions and concerns around how the connections are reused in a connection pool environment.
    Open Question for Solution 1:
    i. What happens when multiple proxy users try to connect at the same time? Business Objects shares the generic app_user connect string, but every user that logs on will have their own unique proxy user credentials. Will there be any contention involved? If so, what kind of errors can we expect?
    ii. Suppose a user logs on with his credentials and Business Objects opens a database connection as that proxy user through the generic app user. When the user then exits, based on our test today, the database connection seems to remain open. In that case, if another user logs on similarly with their credentials, will Business Objects simply assign the first user's connection to that second user? If so, our security will not work. Is there a way for Business Objects to ensure that every time we close a report, the connection is also terminated at both the BO and DB levels?
    iii. Our third question is at a general, high level: how does connection pooling work in general, and how is it implemented in BO? I.e., how are new connections assigned, how are they recycled, how are they closed, etc.?
    Attempted Solution 2: Using the ConnectInit parameter
    Reading through a couple of the Business Objects documents, it states that "Using the ConnectInit parameter it is possible to send commands to the database when opening the session, which can be used to set database-specific parameters used for optimization."
    Therefore, we tried to set the parameter in the Universe using several different options:
    ConnectInit = BEGIN SYSTEM.prc_logon('@VARIABLE('BOUSER')'); COMMIT; END;
    ConnectInit = BEGIN DBMS_SESSION.SET_IDENTIFIER('@Variable('BOUSER')'); COMMIT; END;
    Neither of the above iterations, nor any variation of them, seemed to work. It seems that the variable is not being set or "executed" on the database.
    One of the Business Objects documents stated that Patch ID 38,977,350 must be installed in our BO environments. We have verified that this patch has been applied on our system.
    Open Questions for Solution 2:
    How do we get the ConnectInit parameter to work? I.e., what is the proper syntax to enter, and what else do we need to check to get this working?
    Note: in the original post, the word "arroba" was written in place of the @ symbol, in order to avoid the following forum error message:
    "We are sorry but your message can not be posted since you have included an email address. Please remove the email address and re-post."

    The ConnectInit setting should look something like this:
    declare a date; begin vpd_setup('@VARIABLE('BOUSER')'); Commit; end;
    The vpd_setup procedure (in Oracle) should look like this:
    CREATE OR REPLACE PROCEDURE vpd_setup (p_user VARCHAR) IS
    BEGIN
      DBMS_SESSION.SET_CONTEXT('SESSION_VALUES', 'USERID', p_user);
    END vpd_setup;
    (Note: the context is set with DBMS_SESSION.SET_CONTEXT.)
    Then you can retrieve the value of the context variable in your VPD functions and set the VPD predicate.
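A sketch of the surrounding Oracle objects may make this concrete. The table APP_DATA, its USERID column, and the APP schema are assumed names for illustration only (they do not come from the thread); DBMS_RLS.ADD_POLICY and SYS_CONTEXT are the standard Oracle APIs for attaching the predicate:

```sql
-- Bind the context to vpd_setup: only that procedure may set it.
CREATE CONTEXT session_values USING vpd_setup;

-- Policy function: returns the WHERE predicate VPD appends to queries.
-- The signature (schema, table) -> VARCHAR2 is required by DBMS_RLS.
CREATE OR REPLACE FUNCTION vpd_predicate (
    p_schema IN VARCHAR2,
    p_table  IN VARCHAR2
) RETURN VARCHAR2 IS
BEGIN
  RETURN 'userid = SYS_CONTEXT(''SESSION_VALUES'', ''USERID'')';
END vpd_predicate;
/

-- Attach the policy to the protected table (names are illustrative).
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'APP',
    object_name     => 'APP_DATA',
    policy_name     => 'APP_DATA_VPD',
    function_schema => 'APP',
    policy_function => 'VPD_PREDICATE',
    statement_types => 'SELECT');
END;
/
```

With something like this in place, every session that runs vpd_setup through ConnectInit sees only its own USERID rows, even though all sessions share the generic app_user connection.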

  • One of the databases is down in a streaming environment

    I have a scenario where one of the databases in a two-database Streams environment is unavailable for network/link/database maintenance reasons. Since my other database is up and running and DML/DDL operations are in process, how can I avoid the following error?
    SQL> insert into test values ('eee', 'code e');
    1 row created.
    SQL> commit;
    commit
    ERROR at line 1:
    ORA-02050: transaction 5.23.8870 rolled back, some remote DBs may be in-doubt
    ORA-02068: following severe error from OTHER_DOWN_DATABASE
    ORA-03113: end-of-file on communication channel
    ============================================
    Can I change some parameters in the DBMS_STREAMS_ADM package, or in any other package, to avoid the above situation?

    Thanks for the reply.
    I am getting the same error even if I just do a select on the table which I have set up for streaming:
    SQL> select * from test@dbtest;
    select * from test@dbtest
    ERROR at line 1:
    ORA-02068: following severe error from DBTEST ---destination database
    ORA-03113: end-of-file on communication channel.
    Q: Which database is down? Is it the source database or the destination database?
    A: For now I am testing with the destination database kept down.
    Q: If it's the source DB, just stop the capture. That's all you really have to do.
    A: That can work for planned maintenance, but what about an unplanned outage or sudden problem (with networks/links, or if all of a sudden I cannot stream my data changes to the destination database)?
    Q: The error you're describing is unclear. Do you get that error immediately when you issue the insert and the commit? That sounds very strange to me, because Streams does asynchronous replication; in my opinion you would not get such an error for a downed database.
    A: The first time, I got this error when I updated a row and then tried to commit while my destination database was down.
    Q: It looks like the problem has nothing to do with Streams but is rather a distributed transaction problem (2PC).
    A: If it is a transaction problem, should I be getting a similar error while selecting from the destination table? See the error above.
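Since the thread converges on a distributed-transaction (2PC) cause rather than Streams capture/apply, one diagnostic sketch (assuming DBA privileges; not a definitive fix) is to look for in-doubt transactions and for local objects that reach the remote database over a link:

```sql
-- In-doubt distributed transactions left behind by the failed commit
-- (ORA-02050 names the transaction id, e.g. 5.23.8870):
SELECT local_tran_id, state, fail_time
FROM   dba_2pc_pending;

-- Local objects (synonyms, triggers, materialized views) that drag the
-- remote database into the transaction via a database link:
SELECT owner, name, type
FROM   dba_dependencies
WHERE  referenced_link_name IS NOT NULL;

-- Once identified, an in-doubt transaction can be resolved manually:
-- COMMIT FORCE '5.23.8870';   -- or ROLLBACK FORCE '5.23.8870'
```

If the second query shows an object on the test table referencing the remote side, that would explain why a plain local insert-and-commit drags the downed database into a two-phase commit.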

  • How to put a border around locked text boxes?

    I have a number of locked text boxes. I need to place a border (a box) around them. How do I do that?

    Govan,
    1. It's helpful to give a full and complete description of your problem when first posting, in order to get a complete answer and avoid follow-up posts.
    2. There's no need to post the same answer twice.
    My answer: the stroke/line around a text box is all or nothing. If you want to eliminate the line between adjacent text boxes, you'll have to draw a white line over the offending line to cover it up. Once this is done, select both text boxes and the line (using shift-click or command-click), open the Arrange menu and choose Group (or press Option-Command-G). This will keep the white line in place if the text boxes move.
    Good luck,
    Terry

  • Assets PO filled in F-48

    Hi.
    I filled the purchase order number into the Purchasing Document field in F-48. For that I made the necessary settings in AO90 and elsewhere.
    Now the entry posted is:
    Advance to Vendor A/c.. Dr Rs.50000
    Asset A/c... Dr Rs.50000
      To Bank A/c Rs.50000
      To Asset Clearing A/c Rs.50000
    I do not want it to be posted to the asset account; as it is, the balance sheet asset is getting debited twice. I want to avoid the following line items:
    Asset A/c... Dr Rs.50000
      To Asset Clearing A/c.
    How can I do this?

    Hi Manikandan,
    For this problem, don't go for a validation; validation is not for amounts. To restrict certain amounts for advance payments, create a planning level and, in transaction OBYR, assign the planning level to the special G/L indicator. Alternatively, check the tolerances for vendors; there you can restrict certain amounts per vendor.
    Maybe this information is useful to you.
    Regards
    Surya
    Edited by: surya naveen on Aug 1, 2008 8:34 AM

  • "Common" questions thread/board/whatever

    I have been seeing so many repeat questions lately that really are very common; how to update control values programmatically and how to pass data between parallel loops are the two I see most.
    I have a suggestion that we create a common-questions thread that is put together with links to common forum questions. This could be a sticky, or it could be forced to come up at the beginning of every search no matter which keywords are put in. I'm obviously open to other suggestions, and I'm posting this because there would be other things to resolve (can all members post to the thread, or just NI staff, so we avoid follow-up questions? Which links to common question responses would be added to the thread? How do we determine the BEST answers? etc.). However, I think it would be a nice place to point users to, rather than having to dig up an old thread and point users to it ourselves, when it's really what they should have looked for in the first place. We could just post a link to the "common questions thread".
    This would save Ben from posting his AE nugget link (which he probably has memorized) every day, or people repeatedly posting "try a producer-consumer queued state machine - see the NI template that is included with LabVIEW"... and the list goes on.
    Any comments on how to implement this, or on why it's just a bad idea in general?
    Message Edited by for(imstuck) on 06-03-2010 10:15 AM
    CLA, LabVIEW Versions 2010-2013

    JackDunaway wrote:
    Ben wrote:
    The searches can be improved by US!
    Tags are included in the search. If someone asks about a "niad pulse converter" without mentioning it by name, the search engine could have trouble. But if we know about the thread where a conversation took place, we can add tags to that thread with words that appeared in the original question. The next time someone asks, the tags will show in their search.
    TAGS have the potential of codifying YOUR expertise in such a way  that a novice can stand on your foundation.
    Ben
    Ben, good plug for tagging; there's one problem though. You're by far the most diligent tagger, but most of your tags are hyphenated or underscored on the compound words. For instance, if I search for "Converting array to cluster", I'm guessing the search engine would not return the result that you have tagged "ConvertingArray-to-Cluster". Or am I wrong here: does the search engine throw out the hyphens and underscores?
    A quick search from my Forum_Tips tag yielded this, and I quote:
    Ben wrote
    The search engine is smart enough to ignore underscores. So constructing tags using an underscore between key words lets you create a connection between them.
    All tags (a text string delimited by spaces before and after) are counted, and a connection is made when they are included for the same post.
    So by constructing hierarchical names and grouping them in a single post, you create a structure that, if repeated, will create your own outline.
    I have to say, I'm liking tags more and more!
    Message Edited by Jeff Bohrer on 06-08-2010 05:10 PM
    Jeff

  • Error in X11 (unable to start device PNG)

    Hello.
    I have a problem running an embedded R script.
    The first script runs fine:
    begin
    sys.rqScriptCreate('Example1',
    'function() {
    ID <- 1:10
    res <- data.frame(ID = ID, RES = ID / 100)
    res}');
    end;
    The second one is not so fine:
    select *
    from table(rqEval(NULL,
    'select 1 id, 1 res from dual',
    'Example1'));
    ORA-29400: data cartridge error
    Error in X11(paste("png::", filename, sep = ""), width, height, pointsize, :
    unable to start device PNG
    ORA-06512: at "RQSYS.RQEVALIMPL", line 57
    29400. 00000 - "data cartridge error\n%s"
    Can anyone help me?
    Thanks.

    SELECT PVALUE
    FROM TABLE(rqEval(
    CURSOR(SELECT 1 "ore.connect", 0 "ore.graphics" FROM DUAL),
    'select 1 pValue from dual',
    'RF_TTEST_PVALUES'));
    Hi,
    That is very good. Is there a way to do it in the R client, to avoid the following error?
    Example:
    ORE> library(ORE)
    ORE> ore.connect(user="rquser", password="pass", sid="TEST", host="r2151", port=1525, all=TRUE)
    Error in .oci.GetQuery(conn, statement, data = data, prefetch = prefetch, :
    Error in try({ : ORA-20000: RQuery error
    Error in X11(paste("png::", filename, sep = ""), g$width, g$height, pointsize, :
    unable to start device PNG
    ORA-06512: at "RQSYS.RQEVALIMPL", line 104
    ORA-06512: at "RQSYS.RQEVALIMPL", line 101
    Regards,
    Max

  • Must I reboot my system?

    My development environment is:
    - Model - ABAP
    - UI    - Web Dynpro for Java
    Must I reboot my system to apply an ABAP field that has been modified or newly inserted?
    At the moment I reboot my system every time after reimporting the ABAP model.
    Rebooting takes too much time.
    If I don't do it, the web server shows a 500 error.
    Is there any other way to apply a modified or inserted ABAP field?

    Hi
    If you make any changes in the back-end system, or there is any change in an RFC structure, then you must re-import your model.
    Once you have re-imported the model, you need to restart your server for the changes to take effect.
    To avoid following this procedure, you have another option: delete the model and re-build it again. But for this you have to change your application logic. You can also comment out the code related to the model, rebuild a new enhanced model and use that. (This may not be recommended, but for small applications it is effective.)
    Mandeep Virk

  • How to store and refresh a cache in CPO 2.3?

    We have some dozen workflows (CPO 2.3 processes), each triggered to start by a unique Tidal event. Each workflow, when triggered, fetches numeric data (just two or three two-digit values such as 15, 300) from a single web service endpoint and uses it in its workflow logic. Data fetched by each workflow is usually specific to that workflow. We want the workflow to cache the fetched data locally. The reason for caching data locally in CPO: the rate of web service lookups is 100+ lookups per second, and the web service cannot scale beyond 100 lookups per second. The cached data will be local to that workflow.
    QUESTIONS PLEASE:
    1. What is the best way to store the fetched data? Is it using local variables or global variables or some other storage mechanism?
    2. Should the data thus cached be fetched each time the process (workflow) is re-started?
    3. When the data is changed at the web service, the web service can send fresh data back to CPO to refresh the previously cached data. What is the best way to receive this data in order to refresh the local cache in CPO while the worklfow is running and operational [(i.e) without disrupting the workflow's running state]?
    I will appreciate detailed answers, to avoid follow-up questions.
    thanks in advance,
    Jamal
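Outside CPO, the storage and refresh questions (1 and 3) reduce to a small cache pattern. Here is a minimal sketch in plain Python, purely illustrative: the class and function names are invented, and in CPO itself the store would be a local or global (table) variable rather than code.

```python
# Illustrative sketch only (plain Python, not CPO): the fetch-once /
# push-refresh cache pattern described in the question.

class LocalCache:
    """Workflow-local cache in front of a rate-limited web service."""

    def __init__(self, fetch):
        self._fetch = fetch   # callable that performs the web service lookup
        self._data = {}       # local store, kept for the workflow's lifetime

    def get(self, key):
        # Hit the web service only on a cache miss, so steady-state
        # lookups never touch the endpoint.
        if key not in self._data:
            self._data[key] = self._fetch(key)
        return self._data[key]

    def refresh(self, key, value):
        # Called when the web service pushes fresh data; overwrites one
        # entry without disturbing the rest (or the running workflow).
        self._data[key] = value

calls = []
def fetch(key):                 # stand-in for the real web service call
    calls.append(key)
    return key * 10

cache = LocalCache(fetch)
print(cache.get(3))             # 30: first lookup goes to the service
print(cache.get(3))             # 30: served locally, no second call
cache.refresh(3, 42)            # push-style update from the service
print(cache.get(3))             # 42: refreshed value
print(len(calls))               # 1: the service was called exactly once
```

The key point for question 3 is that a push-style refresh replaces only one entry, so the running workflow keeps serving all other lookups undisturbed.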

    Thanks to Von and Mike for responding. I have summarized your responses to my specific questions. Please see if my summary/understanding of your responses is right.
    QUESTIONS:
    1) Can the PO process create a global table variable containing say 20,000 entries? If not, what's the max limit?
    RESPONSE: There is no max limit to the number of rows in a table variable (global or local).
    2) Assuming the PO process can build a global table variable containing 20,000 entries, will a lookup for ThresholdData given a DeviceIPAddress be VERY SLOW making the PO process inefficient?
    RESPONSE: A select statement querying a table is pretty fast. If you looped trying to find a match, it would be incredibly slow.
    - Let's say at time Tn, the number of entries in the global table variable is 20,000.
    - Let's say at time Tn+1, the ThresholdData for 8000 (of the 20,000) devices got changed in the web service DB.
    QUESTIONS:
    3) Can the web service send ThresholdData for 8000 devices using a table variable of type INPUT? I did not see a table variable for type INPUT supported by a PO process and hence this question.
    RESPONSE: Wrap the table data in CDATA tags in XML, then read it into a table once it gets into the process.
    Now to answer some of your questions:
    1. We are planning on using CPO 2.3 as a real-time alert notification tool to send email notifications to our NOC Operations personnel. We would like to send a notification within 3-5 minutes of TCA generation (yes, TCAs come from a performance management tool such as InfoVista).
    2. The Tidal workflow is expected to do some validation on a received TCA before deciding to send an email notification. For example, Tidal will wait for 15 minutes upon receiving a MAJOR/CRITICAL TCA to see if a CLEAR arrives for that TCA within that 15-minute interval. If no CLEAR arrives, the Tidal workflow will generate a mail notification; if one does, it will do nothing. Like this, we have many HIGHER LEVEL VALIDATION RULES that CANNOT BE PERFORMED AT THE PERFORMANCE MANAGER LEVEL. They can only be performed at the Tidal level, which offers an easy-to-write rules GUI where rules can be changed in a matter of minutes.
    3. No, we do not have 50,000 processes. We will have at most 25 processes (workflows), one for each TCA type coming out of InfoVista. Each process will know how to treat a TCA type before generating an email notification.
    4. The number 50,000 corresponds to devices in our network. The validation rules implemented in the Tidal workflow shall use data such as the 15 minutes mentioned above. An external DB shall maintain the list of 50K devices and the 'time-to-wait-for-CLEAR' data for each of the 50K devices. Like 'time-to-wait-for-CLEAR', we have other data for other validation rules that will be implemented using the Tidal workflow for the various TCAs we intend to support. This DB shall be exposed to the Tidal workflow via a Web Services API.
    I hope our use case is now clear. If so, please do add your 2c to help answer my original questions. I can also get on a call to explain our use case. If you think I should engage with your services team, please send me their contact details and I will do so.
    Thanks for all the support.

  • Help required in setting up the SQL*Plus environment

    Hi,
    I want some hints regarding settings of the SQL*Plus environment.
    The following is a script I run using Unix:
    --------------------------------------------------test.sql-----------------
    set serveroutput on
    set feedback off
    DECLARE
    min_id NUMBER:=&1;
    max_id NUMBER :=&2;
    student_id NUMBER := &3;
    BEGIN
    DBMS_OUTPUT.PUT_LINE('some usefull command');
    END;
    EXIT;
    -------------------------------------End test.sql-------------------------------
    I use the following command to run this script:
    sqlplus -s user/password@database @test.sql 10001 20000 5 > out.log
    After running this command I get the following output in out.log:
    old 3: min_id NUMBER:=&1;
    new 3: min_id NUMBER:=10001;
    old 4: max_id NUMBER :=&2;
    new 4: max_id NUMBER :=20000;
    old 5: student_id NUMBER := &3;
    new 5: student_id NUMBER :=5;
    'some usefull command'
    My doubt is: is there any way to suppress the junk created by SQL*Plus?
    How do I avoid the following statements being written to the output:
    old 3: min_id NUMBER:=&1;
    new 3: min_id NUMBER:=10001;
    old 4: max_id NUMBER :=&2;
    new 4: max_id NUMBER :=20000;
    old 5: student_id NUMBER := &3;
    new 5: student_id NUMBER :=5;
    Thanks
    Rajiv

    I am not sure what the intent of this is, or in what version you intend to use it, but as written this doesn't make much sense.
    The SQL*Plus environment should be set in glogin.sql and login.sql.
    What you've posted, by way of examples, looks like schoolwork. Perhaps you can clarify if using the proper configuration files does not do what you want.
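For the specific old/new lines Rajiv shows: those are SQL*Plus substitution-variable verification messages, and they are suppressed with SET VERIFY OFF. The header of test.sql would then read:

```sql
set serveroutput on
set feedback off
set verify off   -- suppresses the "old n: ..." / "new n: ..." echo
```

With that setting, out.log should contain only the DBMS_OUTPUT line.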

  • LMS4.2.2 crashing UTping.exe with iphlpapi.DLL_unloaded

    Hello,
    What can we do to avoid the following situation? Are there any patches available?
    Application Error: Faulting application name: UTPing.exe, version: 0.0.0.0, time stamp: 0x4f283363 Faulting module name: iphlpapi.DLL_unloaded, version: 0.0.0.0, time stamp: 0x4ce7b859 Exception code: 0xc0000005 Fault offset: 0x74fb870b Faulting process id: 0x5128 Faulting application start time: 0x01ce005a868f8e9b Faulting application path: C:\CSCOpx
    thx in advance for hints, Steffen


  • How to put boxes around text

    Hi,
    I have a section of text that I want to put a box around, as an outline.
    I managed to do this using the Inspector - Text - More - Border and Rules.
    Everything looks great in the authoring tool in both orientations.
    However, when I preview the book on the iPad, the boxes are perfect in landscape, but the other orientation shows outlines for each line of text that ends with a carriage return.
    Is this a bug in the software?
    Mark.

