LOV creation problem

I'm having a problem creating some dynamic LOVs. It appears that I can only create an LOV against a table that has exactly two columns. When I try to create one against a table with three or more columns, even though I'm only selecting two columns, I get the error message "LOV query is invalid, a display and a return value are needed, the column names need to be different."
As a quick test, I tried the scott.emp table and was unable to create an LOV against it. I then made a view called emptest consisting of just the ename and empno fields, and against that I was able to create an LOV.
This works as a last-resort workaround, but is there any idea of when this might be fixed so people can avoid the extra step?
One of the queries I've tried against a table with three columns is:
select affil d, afn_code r
from lu_affiliation
order by 1
This just gives me the error message listed above. However, if I create a view first with just those two columns and then create the LOV against the view, it works.
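For reference, a minimal sketch of that view-based workaround (the view name lu_affiliation_lov_v is made up for illustration; the columns come from the query above):
create view lu_affiliation_lov_v as
select affil, afn_code
from lu_affiliation;
-- LOV query against the two-column view
select affil d, afn_code r
from lu_affiliation_lov_v
order by 1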
Thanks

Yes, I've double-checked, and I am using the D and R aliases. Perhaps it was caused by installing HTML DB after installing Database Control and following the instructions (alerts) about revoking execute from PUBLIC on utl_file and a few others. I noticed later that this caused some problems with other packages becoming invalid, so I re-granted execute to those and re-compiled the invalid packages. But I did do all of this before playing around with HTML DB.
Yesterday I spent all day experimenting, and the only way I could create an LOV was against an object (table or view) with only two columns.
Thanks for any assistance or clarification, though I've found a kludgy workaround.

Similar Messages

  • Designing LOV Query Problem

    Hello APEX people,
    I posted my problem here:
    Designing LOV Query Problem
    What I have is a sequence like this:
    CREATE SEQUENCE
    DR_SEQ_FIRST_SCHEDULE_GROUP
    MINVALUE 1 MAXVALUE 7 INCREMENT BY 1 START WITH 1
    CACHE 6 ORDER CYCLE ;
    What I need would be a SQL query returning all the possible values of my sequence, like:
    1
    2
    3
    4
    5
    6
    7
    I want to use it as a source for a LOV...
    The reason I use the cycling sequence is that my app uses it to rotate scheduling priorities every month among groups identified by this number (1-7).
    In the Admin Form, I want to restrict the assignment in a user-friendly way - an LOV.
    Thanks
    Johann

    Here is the solution (posted by michales in the PL/SQL forum):
    SQL> CREATE SEQUENCE dr_seq_first_schedule_group
         MINVALUE 1 MAXVALUE 7 INCREMENT BY 1 START WITH 1
         CACHE 6 ORDER CYCLE;
    Sequence created.

    SQL> SELECT LEVEL sn
         FROM DUAL
         CONNECT BY LEVEL <= (SELECT max_value
                              FROM user_sequences
                              WHERE sequence_name = 'DR_SEQ_FIRST_SCHEDULE_GROUP');
            SN
    ----------
             1
             2
             3
             4
             5
             6
             7
    7 rows selected.
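    If this query is to feed a dynamic LOV directly, a minimal sketch following the display/return alias convention from the original question (assuming an HTML DB / APEX LOV, which wants two differently named columns) could be:
    select level d, level r
    from dual
    connect by level <= (select max_value
                         from user_sequences
                         where sequence_name = 'DR_SEQ_FIRST_SCHEDULE_GROUP')
    order by 1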

  • ER BC4J Automatic LOV Creation

    I believe it is easy to extend the already defined JDev BC4J structures to implement a new feature, "Automatic LOV Creation", which would help developers define LOVs based on foreign keys in an easy and comprehensive manner.
    Let's assume we have two Entity Objects, A and B. Entity Object A references Entity Object B through a foreign key.
    The developer would like to create a View Object which includes references to Entity Object B.
    Currently the developer chooses in the View Object definition wizard first Entity Object A as "Updatable" and then adds Entity Object B beneath it as "Reference".
    Let us assume there is another option named "Create LOV" which activates when the "Reference" option is selected.
    The developer selects the "Create LOV" option and then proceeds to include attributes from Entity Object B that will participate in the View Object, the same way s/he does now. When the VO is created, it will automatically include an LOV whose attributes are the selected attributes from Entity Object B that participate in the new VO, and the LOV attribute values will map to the corresponding values of the newly created View Object.
    I strongly believe the automatic creation of LOVs dealing with foreign keys is a very useful feature that is easy to implement; it will "sell" well and is worth the investment.
    Best Regards
    Elias.

    The very nice example you are referring to is not appropriate; in the case of a cascading LOV we would still have to perform the steps of the example.
    I would like to change the title of the ER to "Automatic LOV Declaration", because I believe that is more appropriate.
    I will outline an example by using the HR schema borrowing the terminology from the example you provided.
    Create a "vanilla" ADF BC project based on Countries and Regions and update the CountriesVO to display Region_Name from the Regions Entity.
    The attributes selected for the VO are the following:
    CountryId
    CountryName
    RegionId
    RegionId1 (I set this attribute as hidden)
    RegionName
    Notice that at the time we select the Regions Entity, the Reference indication is set.
    So after we select attributes from the Regions Entity, the JDev engine knows the following:
    1. It is supposed to retrieve data from another Entity (Regions), and therefore it knows the LOV VO --> so it can declare the LOV for us.
    2. The RegionId of the Countries VO receives values from the RegionId of the Regions Entity --> so it can link the LOV return value.
    3. The attribute RegionName, which belongs to another Entity (Regions), is displayed --> so it can add the attribute to be displayed in the LOV VO.
    In the case of several "Reference" Entities the JDev Engine would create several LOVs, one for each referenced Entity.
    Best Regards
    Elias.

  • Database creation problem on Windows XP

    Hello Readers
    I have installed the Oracle database engine on Windows XP.
    I am facing a problem with database creation.
    I have tried the wizard as well as the manual method.
    In the wizard, at 90% it gives the error "END-OF-FILE ON COMMUNICATION CHANNEL",
    although the CD is in the CD-ROM drive.
    Please help me.
    Rashid Masood Ashraf
    email: [email protected]

    After going to the properties as you suggested:
    Right now, "Obtain an IP address automatically" is checked.
    I need to check "Use the following IP address:".
    What should I enter for
    IP address:
    Subnet mask:
    Default gateway:
    Please help.
    Edited by: Nel Marcus on Dec 2, 2008 3:49 PM

  • Remittance challan creation problem in cross-company code scenario

    I made an advance payment to a cross-company code, deducting the TDS amount at posting time. At the time of remittance challan creation I got the error below. Please help me.
    No unpaid tax lines exist for the given selection criteria.
    Message no. 8I702
    Diagnosis
    The corresponding withholding tax line  &1& is not present in WITH_ITEM table.
    System Response
    For withholding tax recovered from the vendor, tax line is present in table BSIS, but the corresponding entry is missing in table WITH_ITEM , which is necessary for challan updation. Check the entries.
    Procedure
    check entries in table WITH_ITEM for the open tax items chosen for clearing.
    Edited by: TEEGALA SATISH on Jun 15, 2010 5:43 PM

    Hi... Actually, I am getting the same problem in challan creation. Did you get any solution?

  • Process order creation problem

    I am creating a process order. While creating it, the system throws the error "Auto batch numbering not set up for material type XXXX in plant xxxx" and does not allow the process order to be created.
    How do I resolve this issue?

    Check in CORW and switch off batch creation to isolate the problem if needed. Also see whether you have any batch-managed components for which you have set master data for automatic batch determination.
    There is no batch setting by material type and plant, so maybe the error message is not correct. What is the message number?

  • New Excise Invoice Creation - Problem

    Hi All,
    I am facing a problem with the creation of an excise invoice. I don't know the step-by-step process; can anyone help me with this?
    Best Regards
    Chintesh Soni

    Hi Chintesh,
    If you are referring to a 100% tax invoice, please see note [887917|https://service.sap.com/sap/support/notes/887917] for two workarounds.
    All the best,
    Kerstin

  • Logical Standby Creation Problems: "Create Status Unknown"

    We are using Grid Control 10.2.0.3 on Oracle 10.2.0.3 databases, on Windows 2003 SP2 machines.
    We have a primary and an FSFO physical standby working properly. We went to create a logical standby on a separate server and had problems. It turned out that the listener on that server was not discovered on the target. We fixed that problem and finished the creation of the logical standby, and it seems to be functioning properly now.
    However, the status on the Data Guard page for the logical standby shows Create Status Unknown, and we are unable to proceed to create an additional standby database for other purposes. We have tried removing the logical standby target and re-discovering it, but the status still shows Create Status Unknown.
    Is there any way to clear this status without having to drop the logical standby and completely recreate it?

    The status on the agent shows it uploading successfully.
    We resolved the issue by removing the logical standby from the configuration and selecting the option to retain the log apply. We then re-added the standby, selecting the option to manage an existing standby database, and the status went to normal.

  • Date of creation problem after making Bridge create TIFs with PS Image Processor.

    I went through about 100 files and did what I wanted in Camera RAW (started out with NEFs). Thereafter I had Bridge make TIFs of all the files in the folder using the PS Image Processor.
    Now, looking at the TIF folder in Bridge, I find that about 80 of them have a wrong creation date (I checked the Preferences, and I have asked for the date of creation, not the date of modification). As I watch (for several minutes), Bridge changes some of the dates to the ones known to me to be correct, but not all. If I close Bridge, restart it and open the folder again, the same thing happens, i.e. it starts all over. Even stranger, Windows Pathfinder has problems with the same files, i.e. it does not show the date of creation at all.
    The dates of these 80 files are all the same, yet they were edited over several days!
    I have done a lot of work on those files and would hate to start all over. I checked and of course the dates are the original in the RAW folder that I started out with.
    I am sure that I do not have any bugs in my PC.
    Can I remedy this in any way? I hate to have wrong dates on my files.
    What can I have done to cause this? Can anyone help me? Thanking You in advance.
    Git

    Thank You again.
    Yes, I have also purged the cache. More than once and again this morning. It does not change the result. Bridge still goes from the date January 13th 2013 to the correct date of creation when I look at the separate thumbnails - and back again when I scroll down the page and subsequently scroll up again!
    Thus it seems that Bridge knows the date that the camera set but somehow this foolish January date keeps coming back.
    All the folders with NEFs have correct dates - also the ones where I have imported the files from camera to pc in 2013! Only my TIFFs have this problem.
    Right now I am having Norton 360 run a complete system scan. When that is done I will deactivate Norton (I know that I cannot have two antivirus and firewall products running simultaneously on the PC) and instead run a complete scan with Lavasoft Adaware Pro. I have also used Norton Utilities and found no problems, plus Malwarebytes Antimalware and Superantispyware. No problems.
    If "date file created" is the date that the file was modified, it would not help me much. I often go back to a file and play with it again and if I have understood correctly, then this date would change again. I want to see the date of capture ("date created"). Also, by now the problem afflicts at least 1000 files. It would be impossible to do single file manual corrections of dates on so many files.
    Update: While I have been writing this, an open folder with only 36 files has changed its dates to the correct date of capture. I could also scroll up and down without the dates changing in this small folder. But when I closed Bridge, restarted it and looked at the folder again, Bridge once more needed minutes to get all the dates right. Maybe I have a PC capacity problem instead of a software problem? But why on earth would this specific wrong date result from a PC that is too weak?
    Maybe I should just give up and reinstall PS?
    Sorry if I trouble You with a ridiculous problem.
    Git

  • Select List (based on LOV) query problem

    Hello experts! I have a small problem here, which I can't seem to overcome.
    I have a page item (select list based on LOV), which is based on a query. The query returns all potential employees of a department that are responsible for a certain duty. So far so good!
    The problem is that there are two departments which should see not only their own employees but also the name of the employee that has carried out a certain task. However, due to my query, the name of that person is not displayed - only the PK is returned.
    Do you have a recommendation how I display all employees of a specific department and have additional values translated as well?
    My query is as follows:
    select str_bearbeiter, cnt_bearbeiter from vt_tbl_bearbeiter
    where cnt_bearbeiter in (SELECT CNT_REGIERUNGSBEZIRK FROM TBL_REGIERUNGSBEZIRK)
    union
    select str_bearbeiter, cnt_bearbeiter from vt_tbl_bearbeiter
    where int_behoerde in (SELECT CNT_REGIERUNGSBEZIRK FROM TBL_REGIERUNGSBEZIRK
                           where STR_REGIERUNGSBEZIRK = lower(:app_user))
    Here :app_user holds the information of the department.
    Any hint is appreciated!
    Many thanks,
    Seb

    Okay, I just had the right idea and it's working well! Sorry for posting!
    I return the name of the employee that has edited a dataset and simply add all the others from the logged-on department! Really easy! Should have thought of that before posting! ;-(
    The correct code is:
    select str_bearbeiter, cnt_bearbeiter
    from vt_tbl_bearbeiter a, vt_tbl_punktdaten b
    where a.cnt_bearbeiter = b.int_bearbeiter
    and inv_pt_id_sub = :P4_PTIDS
    union
    select str_bearbeiter, cnt_bearbeiter from vt_tbl_bearbeiter
    where int_behoerde in (SELECT CNT_REGIERUNGSBEZIRK FROM TBL_REGIERUNGSBEZIRK
                           where STR_REGIERUNGSBEZIRK = lower(:app_user))
    Bye,
    Seb

  • RAID creation problem

    When I try to create two RAID sets, both mirrored, in Disk Utility, one set works perfectly, while attempting to create the other results in the error message: "Creating RAID set failed with error: Could not find RAID"
    I've tested the disks with Techtool Pro and there were no errors found on either. I have also used the repair/rebuild function in DiskWarrior on both of them as well as the repair permissions and repair disk functions in Disk Utility.
    After the above error, the two drives involved do not show up in the Finder, but they are still present in Disk Utility as unmounted disks. In order to act on them again they have to be initialized/formatted. After doing this they show up again, but any attempt to create a mirrored set with them continues to fail.
    Is there anything that can be done to fix this? Or are these bad drives that are good at hiding it (no bad sectors, all tests pass)?

    From your comments, I guess you are using four disks and trying to create two mirrored RAIDs at the same time? If that's the case, you might try creating just one set using two disks, then quit Disk Utility. Launch Disk Utility again, then try creating the second set using the other two disks. If that doesn't work, you might try rebooting in between RAID creations. If that doesn't work, you might try the workaround seen here:
    http://forums.creativecow.net/archivethread/71/480772
    "2) Create 1st stripe from two devices, don't do anything to the other two yet. Shutdown Mac and turn off the power to the devices of the 1st stripe, leave the power on to the two un-stripe devices. Boot and stripe the 2nd pair, shutdown Mac. Turn power back on to the 1st stripe and reboot (all power to devices are on). Both stripes should be on the desk top."
    Personally, that 'solution' seems a little dated. Disk Utility has been able to create more than one set for a while now. I just tried it. I have a Stripped BootRAID using two disks, I used a partition from another internal disk and an old external disk, connected by USB2, to create another Stripped RAID. Then I broke the new Stripped set and created a Mirrored set. I didn't have any problems creating either. I had two stripped RAIDs on a FW800 G4 running Tiger a couple years ago...
    Good Luck

  • Generic Data Source Creation Problem....URGENT!!!!!!!!!!!!!

    Hello BW Experts...!
    I need to create a generic DataSource out of a table called VBSEGK. I was trying the usual way with RSO2, but when I press the Save button after entering the table name, the following error comes up:
    "Invalid Extract Structure Template VBSEGK of DataSource ZPARK_01"
    and when I click on the error message it shows:
    Diagnosis
    You tried to generate an extract structure with the template structure VBSEGK. This operation failed, because the template structure quantity fields or currency fields, for example, field DMBTR refer to a different table.
    Procedure
    Use the template structure to create a view or DDIC structure that does not contain the inadmissible fields.
    VBSEGK is a standard table, so I can't change the table structure. Can anyone give me some idea of how to create a DataSource with this table, please?
    Please ask me questions if you didn't get this...
    thanks

    Hi Harish,
    Please check OSS note 335342.
    Symptom
    The creation of a generic DataSource which is extracted from a DDIC view or a transparent table terminates with error message R8359: Invalid extract structure template &2 of DataSource &1
    Other terms
    OLTP, DataSource, extractor, data extraction, generic extractor
    Reason and Prerequisites
    The problem is caused by a program error.
    Solution
    The table or view used for extraction must have currency and unit fields within the field list of the table/view for all currency and quantity key figures. Otherwise the consistency of the extracted data cannot be ensured. To make the generation possible, check whether all key figures of your table refer to unit fields that are within the field list. If this is not the case, there are two possibilities:
    1. A table is used for extraction.
    Create a view in which you have a currency / unit field contained in the view for each key figure. The currency / unit field from the table must be included in the view to which the key figure actually refers.
    Example:
    Field WKGBTR of table COEP refers to the unit field WAERS of table TKA01. In a view that contains field COEP-WKGBTR, table TKA01 and field WAERS must be included in the field list.
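    As a purely illustrative sketch (plain SQL with a made-up view name; in SAP this would actually be a DDIC view maintained in SE11), the idea is to carry the currency field along with the key figure:
    CREATE VIEW zco_wkgbtr_v AS
      SELECT c.kokrs,   -- controlling area (join key)
             c.belnr,   -- document number
             c.buzei,   -- line item
             c.wkgbtr,  -- key figure (amount)
             t.waers    -- currency the key figure refers to
      FROM   coep c
      JOIN   tka01 t ON t.kokrs = c.kokrs;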
    If the currency or the unit a key figure refers to is not located in a table but in a structure, the key figure has to be removed from the view and read via a customer exit (see below). Structures cannot be included in a view.
    ATTENTION! Often the key of the table in which the referenced unit is located does not agree with the key of the table with the corresponding key figure. In this case, the join condition of the two tables is not unique in the view definition; that means that for each key line of the table with the key figure, several lines of the table with the unit may be read. The result is that the number of lines in the view is multiplied by the number of lines in the unit table that match the key figure. To be able to deliver consistent data to a BW system, check whether the unit of the key figure in question should always have a fixed value. If yes, then you can specify that in the view definition via 'GoTo -> Selection conditions'. If no, then you must proceed as follows:
    a) Remove the key figure from the view
    b) Define the DataSource
    c) Enhance the extract structure by key figure and unit for each append (Transaction RSA6)
    d) Add the key figure and the unit in a customer exit
    2. A view is already used for the extraction
    If it is not possible to obtain a unit of measure from a table on which the view is based, the unit field must be deleted from the field list.
    A note in relation to the upward compatibility of BW-BCT InfoSources: BW-BCT 1.2B was not yet able to check units and currency fields. For this reason, it is possible that InfoSources which were defined in the source system as of BW-BCT 1.2B must be redefined as of 2000.1 in the manner described above. However, checks are absolutely essential for the consistency of the extracted data.
    Regards,
    Praveen

  • Database creation problems and parition consolidation

    This is my first time on here so please look past my green-ness. =)
    I installed Oracle 8.1.7 successfully and opted to set up the database creation later so I could put the database on another partition. Well, I tried once to create the database on the partition I installed Oracle onto (/oracle), but I had insufficient space; it said I needed 1.3 GB. My /database partition is only a 1 GB partition (and /oracle is 2 GB). My plan was, of course, to separate them.
    OK, I didn't see an option to create the DB on another partition, so I resigned myself to using /oracle. My question now is: is the 960 MB on /oracle enough to create the database? I used the custom setup so I could see how everything was broken down, and I noticed that my rollback segment is 500 MB. Is this necessary? Can I get by with 100 MB and be fine? Also, if I did do this, I'd have approximately 100 MB free for data; I hope this is enough?
    Lastly, I need to combine the partitions somehow without affecting /. The hard drive is broken down into:
    5.5 GB   /
    2.0 GB   /oracle
    1.0 GB   /database
    Does anyone have any ideas? In Windows I used Partition Magic, but I don't know if there is a Linux version.
    One more thought (last one for real =): it has 50 MB allocated for its shared memory pool. I'm assuming this will take away from the 128 MB of RAM I have, effectively making only 78 MB available to X; will this cause any noticeable problems other than a bit of slowdown? I have most if not all of my virtual memory free at all times. I know I should get more RAM, but you know how the budget thing goes...
    Any help would be greatly appreciated,
    Rob

    Try running the catalog and catproc SQL scripts (they live under $ORACLE_HOME/rdbms/admin):
    $ cd $ORACLE_HOME/rdbms/admin
    $ svrmgrl
    connect internal
    @catalog
    @catproc
    That will set up all of the internal views and packages that
    oracle needs to do DDL.
    Scott
    Don DeLuca (guest) wrote:
    : I've gotten through the install and created the database. The
    : database comes up, and I can do some simple stuff like create a new
    : user or create a simple table, but when I try to run SQL DDL
    : statements I get errors saying: "recursive SQL error". I
    : looked at the sql.log file after the install. It doesn't look
    : like all the packages were created successfully. Have others had
    : similar problems with the system table/package creation? To me
    : it looks like a DDL statement that was needed was not called during the
    : install and later caused problems during package creation.
    : This is my fourth time installing and I have the same problem
    : each time. I also tried to run the plsql.sql file to recreate
    : some of the packages but this did not work either.
    : I've just about given up and I'm starting to realize why it's
    : free.

  • LOV Query problem

    I have a very large and lengthy LOV query. I am using the Forms 6i Builder, and when I save the module and try to reopen it, only about three-quarters of the LOV query is there and the rest disappears. I can compile this in 6i by pasting the query in again, but it creates a problem in 10g, i.e. the FMX file is not generated in 10g.
    Firstly, I have no idea why the code disappears for no reason. Is it because it's too lengthy? I am sure there are no errors, because it wouldn't have compiled if there were any.
    Anybody who has come across such problems, kindly advise!
    Thank you.

    Hi,
    Create a VIEW of your query at the database level and then create the LOV based on that VIEW.
    Regards
    PS
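    To illustrate the suggestion above with made-up names (my_big_lov_v, display_col and return_col are hypothetical, and scott.emp merely stands in for the long query): move the long SELECT into a database view so the query stored in the form stays short.
    create or replace view my_big_lov_v as
    select ename display_col, empno return_col
    from scott.emp;   -- in practice, the long LOV query goes here
    -- the record group / LOV query inside the form is then simply:
    select display_col, return_col
    from my_big_lov_v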

  • SUNW.gds resource creation problem / service startup failure

    We have a vendor-provided set of application server start and stop scripts that we are trying to mold into a GDS resource. The scripts must be run as a user, not as root. This app takes about 6 minutes to start up and it creates about 400 processes in doing so. We gave the user cluster.modify authorization so it can create the resource, and the command takes, but when the resource is being created, the startup script does not seem to fire. So I'm a little stumped as to how to make this thing run under the user account's project settings. The other thing I noticed is that the ports I specify in the list are never present until the end of the 6 minutes of startup junk, and the GDS seems to log problems about these ports at the onset of creation. Here is the command for the create:
    clrs create -g ss_wfm-rg -t SUNW.gds  \
    -p Start_Command="/home/appl/advxrt/R8.0g/TOP/SOLARIS/bin/StartServer" \
    -p Stop_Command="/home/appl/advxrt/R8.0g/TOP/SOLARIS/bin/ShutdownServer" \
    -p Resource_dependencies=advxprd,ss_hasp-rs \
    -p Port_list=22100/tcp,23100/tcp,27100/tcp,4100/tcp,17100/tcp \
    -p Start_timeout=420 \
    -p Stop_timeout=120 \
    -p Child_mon_level=-1 \
    ss_server_gds-rs
    And from the messages log...
    Aug 18 07:31:04 prddsm01nd01 SC[SUNW.gds:6,ss_wfm-rg,ss_server_gds-rs,gds_svc_start]: [ID 141062 daemon.error] Failed to connect to host advxprd and port 22100: Connection refused.
    Aug 18 07:31:04 prddsm01nd01 SC[SUNW.gds:6,ss_wfm-rg,ss_server_gds-rs,gds_svc_start]: [ID 805735 daemon.error] Failed to connect to the host <advxprd> and port <22100>.
    Aug 18 07:31:06 prddsm01nd01 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="ss_wfm-rg,ss_server_gds-rs,0.svc", cmd="/bin/sh -c /home/appl/advxrt/R8.0g/TOP/SOLARIS/bin/StartServer", Failed to stay up.
    Aug 18 07:31:06 prddsm01nd01 Cluster.PMF.pmfd: [ID 534408 daemon.notice] "ss_wfm-rg,ss_server_gds-rs,0.svc" restarting too often ... sleeping 30 seconds.
    Aug 18 07:31:06 prddsm01nd01 Cluster.RGM.rgmd: [ID 650825 daemon.error] Method <gds_svc_start> on resource <ss_server_gds-rs> terminated due to receipt of signal <9>
    Basically, I would like to know whether the port list should include all of the ports that the app will eventually listen on, or only some, or a minimal set? The other biggie is how to get this thing to launch as the user so that, if it fails over to another node, it does not start as root. I tried doing su - acctname -c "script" but it won't start it. Thanks!

    Detlef or anyone familiar with GDS...
    We agree here that customizing the GDS using your template is the way to go. What I have noticed is that, since creating the wrapper scripts to call the newtask command, set up the environment, etc., when the wrapper script finishes (after kicking off the native script) the RGM sees this as a fault and immediately tries to restart the agent. Here is a snippet of the messages log.
    Sep 24 13:48:03 devdsm01nd01 Cluster.PMF.pmfd: [ID 887656 daemon.notice] Process: tag="ss_stage-rg,ss_stage_server_gds-rs,0.svc", cmd="/bin/sh -c /home/appl/advxtst/tds/SS_kicker.sh", Failed to stay up.
    Sep 24 13:48:03 devdsm01nd01 Cluster.RGM.rgmd: [ID 784560 daemon.notice] resource ss_stage_server_gds-rs status on node devdsm01nd01 change to R_FM_FAULTED
    Sep 24 13:48:03 devdsm01nd01 Cluster.RGM.rgmd: [ID 922363 daemon.notice] resource ss_stage_server_gds-rs status msg on node devdsm01nd01 change to <Service daemon not running.>
    Sep 24 13:48:03 devdsm01nd01 SC[SUNW.gds:6,ss_stage-rg,ss_stage_server_gds-rs,gds_probe]: [ID 670283 daemon.notice] Issuing a resource restart request because the application exited.
    So we figured out that if we suspend the RG, then we can of course stop and start the app all day long using the clrs disable/enable commands, but is using these wrapper scripts still the way to go even with the GDS template? Even assigning the user's project settings to the resources does not seem to allow the native vendor-supplied startup scripts to launch (they say it is purely because we did not set up the working directory, etc., correctly, which the wrapper scripts do quite well). Is it normal for the wrapper script to "die" but yet have GDS probe the ports after X amount of time? Is this where the sleeping dummy comes into play?
    thanks in advance!
