Questions regarding procedures

1. How do we develop a procedure to create our own print statement?
2. How do we write a procedure that accepts, for example, a deptno as input and prints all the details of the employees (name, number, job, etc.) from an employee table, together with their grade from a salary table?
Thanks

user8706975 wrote:
1. How do we develop a procedure to create our own print statement?
Print on what, where, and how?
PL/SQL cannot "print" anything. It is a background server process. It is not connected to a mouse or keyboard to read user input. It is not connected to a computer screen or printer to provide user output.
2. How do we write a procedure that accepts, for example, a deptno as input and prints all the details of the employees (name, number, job, etc.) from an employee table, together with their grade from a salary table?
Agree with Tubby. Sounds like homework.
The correct way to do this is to create a procedure that accepts the data from the client software as parameters. The client program needs to interact with the user.
It then creates a reference cursor for the required rows and returns that to the client. The client program uses this reference cursor, fetches from it, and displays the data to the user.
Details on REF CURSORs in the [Oracle® Database PL/SQL User Guide and Reference|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14261/sqloperations.htm#sthref1392].
As for the client part - I assume you will be using SQL*Plus. You need to research how to use bind variables in SQL*Plus (the VAR command) - you use one of those to call the stored procedure and receive the reference cursor as an output variable.
As for displaying the ref cursor in SQL*Plus - that's the easy part. SQL*Plus sports a PRINT command that does that for you.
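As a minimal sketch, assuming the classic SCOTT demo schema (an EMP table plus a SALGRADE table whose LOSAL/HISAL columns bracket each salary grade - substitute your own table and column names), the server side could look like this:
create or replace procedure GetEmpDetails(
  dept in emp.deptno%type,     -- department number supplied by the client
  emps out sys_refcursor       -- reference cursor handed back to the client
) as
begin
  open emps for
    select e.empno, e.ename, e.job, s.grade
    from   emp e
           join salgrade s on e.sal between s.losal and s.hisal
    where  e.deptno = dept;
end;
/
The SQL*Plus client side then boils down to three commands:
var rc refcursor
exec GetEmpDetails( 10, :rc );
print rc
PRINT fetches from the returned cursor and displays the rows for you.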
All this information is readily available in the Oracle manual, in these forums and on the net. And hopefully part of this homework assignment is teaching you how to use the resources available (documentation and the Net) to research and solve a problem. That is as important as writing the right code at the end of the day.

Similar Messages

  • I have some questions regarding setting up a software RAID 0 on a Mac Pro

    I have some questions regarding setting up a software RAID 0 on a Mac Pro (early 2009).
    These questions might seem stupid to many of you, but, as my last, in fact my one and only, computer before the Mac Pro was a IICX/4/80 running System 7.5, I am a complete novice regarding this particular matter.
    A few days ago I installed a WD3000HLFS VelociRaptor 300GB in bay 1, and moved the original 640GB HD to bay 2. I now have 2 bootable internal drives, and currently I am using the VR300 as my startup disk. Instead of cloning from the original drive, I have reinstalled the Mac OS, and all my applications & software onto the VR300. Everything is backed up onto a WD SE II 2TB external drive, using Time Machine. The original 640GB has an eDrive partition, which was created some time ago using TechTool Pro 5.
    The system will be used primarily for photo editing, digital imaging, and to produce colour prints up to A2 size. Some of the image files, from scanned imports of film negatives & transparencies, will be 40MB or larger. Next year I hope to buy a high resolution full frame digital SLR, which will also generate large files.
    Currently I am using Apple's bundled iPhoto, Aperture 2, Photoshop Elements 8, SilverFast Ai, ColorMunki Photo, EZcolor and other applications/software. I will also be using Photoshop CS5 when it becomes available, and I will probably change over to Lightroom 3, which is currently in beta, because I have had problems with Aperture: until recent upgrades to my system (HD, RAM & graphics card), it would not even load images for print. All I had was a blank preview page and a constant, frozen "loading" message - the symbol underneath remained static instead of revolving!
    It is now possible to print images from within Aperture 2, but I am not happy with the colour fidelity, whereas it is possible to produce excellent, natural colour prints using its "minnow" sibling, iPhoto!
    My intention is to buy another 3 VR300s to form a 4-drive RAID 0 array for optimum performance, and to store the original 640GB drive as an emergency bootable back-up. I would have ordered the additional VR300s already, but for the fact that there appears to have been a run on them; currently they are out of stock at all but the more expensive UK resellers.
    I should be most grateful to receive advice regarding the following questions:
    QUESTION 1:
    I have had a look at the RAID setting up facility in Disk Utility and it states: "To create a RAID set, drag disks or partitions into the list below".
    If I install another 3 VR300s, can I drag all 4 of them into the "list below" box, without any risk of losing everything I have already installed on the existing VR300?
    Or would I have to reinstall the OS, applications and software again?
    I mention this, because one of the applications, Personal accountz, has a label on its CD wallet stating that the Licence Key can only be used once, and I have already used it when I installed it on the existing VR300.
    QUESTION 2:
    I understand that the failure of just one drive will result in all the data in a Raid 0 array being lost.
    Does this mean that I would not be able to boot up from the 4 drive array in that scenario?
    Even so, it would be worth the risk to gain the optimum performance provided by RAID 0 over the other RAID setup options, and, in addition to the SE II, I will probably back up all my image files onto a portable drive as an additional precaution.
    QUESTION 3:
    Is it possible to create an eDrive partition, using TechTool Pro 5, on the VR300 in bay 1?
    Or would this not be of any use anyway, in the event of a single drive failure?
    QUESTION 4:
    Would there be a significant increase in performance using a 4 x VR300 drive RAID 0 array, compared to only 2 or 3 drives?
    QUESTION 5:
    If I used a 3 x VR300 RAID 0 array, and installed either a cloned VR300 or the original 640GB HD in bay 4, and I left the Startup Disk in System Preferences unlocked, would the system boot up automatically from the 4th drive in the event of a single drive failure in the 3-drive RAID 0 array which had been selected for startup?
    Apologies if these seem stupid questions, but I am trying to determine the best option without forgoing optimum performance.

    Well said.
    Steps to set up RAID
    Setting up a RAID array in Mac OS X is part of the installation process. This procedure assumes that you have already installed Mac OS 10.1 and the hard drive subsystem (two hard drives and a PCI controller card, for example) that RAID will be implemented on. Follow these steps:
    1. Open Disk Utility (/Applications/Utilities).
    2. When the disks appear in the pane on the left, select the disks you wish to be in the array and drag them to the disk panel.
    3. Choose Stripe or Mirror from the RAID Scheme pop-up menu.
    4. Name the RAID set.
    5. Choose a volume format. The size of the array will be automatically determined based on what you selected.
    6. Click Create.
    Recovering from a hard drive failure on a mirrored array
    1. Open Disk Utility in (/Applications/Utilities).
    2. Click the RAID tab. If an issue has occurred, a dialog box will appear that describes it.
    3. If an issue with the disk is indicated, click Rebuild.
    4. If Rebuild does not work, shut down the computer and replace the damaged hard disk.
    5. Repeat steps 1 and 2.
    6. Drag the icon of the new disk on top of that of the removed disk.
    7. Click Rebuild.
    http://support.apple.com/kb/HT2559
    Drive A + B = VOLUME ONE
    Drive C + D = VOLUME TWO
    What you put on those volumes is of course up to you and easy to do.
    A system really only needs to be backed up "as needed" - e.g. before you add, update, or install anything.
    /Users can be backed up on an hourly, daily, or weekly schedule.
    Media files as needed.
    Things that hurt performance:
    Page outs
    Spotlight - disable this for boot drive and 'scratch'
    SCRATCH: Temporary space; erased between projects and steps.
    http://en.wikipedia.org/wiki/Standard_RAID_levels
    (normally I'd link to Wikipedia but I can't load right now)
    Disk drives are the slowest component, so tackling that has always made sense. Easy way to make a difference. More RAM only if it will be of value and used. Same with more/faster processors, or graphic card.
    To help understand and configure your 2009 Nehalem Mac Pro:
    http://arstechnica.com/apple/reviews/2009/04/266ghz-8-core-mac-pro-review.ars/1
    http://macperformanceguide.com/
    http://www.macgurus.com/guides/storageaccelguide.php
    http://www.macintouch.com/readerreports/harddrives/index.html
    http://macperformanceguide.com/OptimizingPhotoshop-Configuration.html
    http://kb2.adobe.com/cps/404/kb404440.html

  • Questions regarding creation of vendor in different purchase organisation

    Hi ABAP gurus,
    I have a few questions regarding data transfers.
    1) While creating a vendor, the vendor is specific to a company code, and the vendor can be present in different purchasing organisations within the same company code if the purchasing organisation is at plant level. My client has vendors in different purchasing organisations. How do we handle this situation?
    2) I had a few error records while uploading MM01. How do I download the error records? I was using LSMW with predefined programs.
    3) For a few applications there are no predefined programs, so I will have to choose either a predefined BAPI or IDocs. Which is the better way to go? I found that BAPIs and IDocs have the same predefined structures, so what is the difference between the two?

    Hi,
    1. Create a BDC program with purchasing organisation as a parameter on the selection screen, then run the same BDC program for the different purchasing organisations so that the vendors are created in each of them.
    2. Check the Action Log in LSMW and see.
    3. See the documentation below.
    BAPI - BAPIs (Business Application Programming Interfaces) are the standard SAP interfaces. They play an important role in the technical integration and in the exchange of business data between SAP components, and between SAP and non-SAP components. BAPIs enable you to integrate these components and are therefore an important part of developing integration scenarios where multiple components are connected to each other, either on a local network or on the Internet.
    BAPIs allow integration at the business level, not the technical level. This provides for greater stability of the linkage and independence from the underlying communication technology.
    LSMW - No ABAP effort is required for the SAP data migration. However, effort is required to map the data into the structure according to the pre-determined format, as specified by the pre-written ABAP upload program of the LSMW.
    The Legacy System Migration Workbench (LSMW) is a tool recommended by SAP that you can use to transfer data once only or periodically from legacy systems into an R/3 System.
    More and more medium-sized firms are implementing SAP solutions, and many of them have their legacy data in desktop programs. In this case, the data is exported in a format that can be read by PC spreadsheet systems. As a result, the data transfer is mere child's play: Simply enter the field names in the first line of the table, and the LSM Workbench's import routine automatically generates the input file for your conversion program.
    The LSM Workbench lets you check the data for migration against the current settings of your customizing. The check is performed after the data migration, but before the update in your database.
    So although it was designed for uploading of legacy data it is not restricted to this use.
    We use it for mass changes, i.e. uploading new/replacement data and it is great, but there are limits on its functionality, depending on the complexity of the transaction you are trying to replicate.
    The SAP transaction code is 'LSMW' for SAP version 4.6x.
    Check your procedure using these links.
    BAPI with LSMW
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI
    For a document on using BAPI with LSMW, I suggest you visit:
    http://www.****************/Tutorials/LSMW/BAPIinLSMW/BL1.htm
    http://esnips.com/doc/1cd73c19-4263-42a4-9d6f-ac5487b0ebcb/LSMW-with-Idocs.ppt
    http://esnips.com/doc/ef04c89f-f3a2-473c-beee-6db5bb3dbb0e/LSMW-with-BAPI.ppt
    Regards
    Anji

  • Question regarding the "mcxquery" and "dscl -mcxread" commands:

    Question regarding the mcxquery and dscl -mcxread commands:
    Does anyone know why the mcxquery and the dscl . -mcxread commands don't show any info about MCX-managed login items & printers? The System Profiler's "Managed Client" section does. I'd like to see info regarding managed printers and managed login items using the MCX tools. I have Mac users running 10.5.2 with both login items and printers that are pushed out to them via MCX. The System Profiler app shows all of their policies, but the dscl . -mcxread and mcxquery tools don't. Why not?
    -D
    Message was edited by: Daniel Stranathan

    How do you "call procedures/functions" without SQL code? You need at least a call statement like
    {call myProc(?,?,?)}
    that you wrap into a CallableStatement.
    Other than that: when you switch off autocommit, you need to call commit/rollback at the end. Usually, if you don't commit/rollback a non-autocommitted connection, the transaction gets committed/rolled back when you close the connection - that depends on the JDBC driver. But it's never a good idea to omit the commit/rollback calls on a non-autocommit connection. Usually you enclose your code in a try/catch block like this:
    con.setAutoCommit(false);
    try {
        // ... execute your statements here ...
        con.commit();              // make the changes permanent
    } catch (Exception e) {
        con.rollback();            // undo the whole transaction on failure
        throw e;
    } finally {
        con.setAutoCommit(true);   // restore the default, or:
        con.close();               // release the connection
    }

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200.000.000 documents per day with a maximum of about 5000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure. This would result in the need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2000 RUs, I would expect the RU usage to be about 4 RUs per single document insert or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok…obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection into which I would actually insert documents).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example: 
    1 CU -> 2000 RUs
    7 collections -> 2000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20.000
    => 70 CUs (20.000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as-is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and I get horrible query execution times with Table Storage because of full table scans.
    Once again my desired setup:
    200.000.000 inserts per day / Maximum of 5000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention…but I guess this won't be an option at all?
    I hope you understand my problem; please give me some advice on whether this is at all possible, or will be possible in the future with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected…even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options…which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2000 RUs, or is this a totally theoretical value? Or is it just because of being Preview, and the stated values are planned to work?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case…so queuing is not an option.
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose this as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by not clustering into "hot" and "cold" collections but into "hot" and "cold" databases with one or multiple collections each (if 10GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you would be able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2000 RUs in reality? ;-)
    Best regards and thanks again

  • Questions regarding currency conversion using SPRUNCONVERSION.

    Hi all,
    I have implemented currency conversion using the SPRUNCONVERSION stored procedure. I have the following questions regarding how it works.
    1. Is it mandatory to create FxTrans Business Rules with SPRUNCONVERSION?
    2. How does it work internally? How does it look up rate from the rate application?
    Any help will be greatly appreciated. Any reference to any documents will help.
    Thanks & Regards,
    Diksha.
    (PS: The how-to document does not explain how it works.)

    Hi Diksha,
    1. Yes, since it bases its translation rules on what is defined in the currency conversion business rules.
    2. The stored procedure SPRUNCONVERSION scans all records found in the selected region of data and translates them according to the RATETYPE property assigned to the ACCOUNT specified in each record, based on the following mechanism:
    Note:
    All ACCOUNTS with no RATETYPE (ratetype = blank) will be translated with a factor of 1
    All ACCOUNTS with the reserved RATETYPE = NOTRANS will not be translated
    All other ACCOUNTS will be translated according to the definitions contained in the table of parameters called clcFXTRANS.
    How the RATE table is selected
    While most customers require a single table of rates, there are situations when more than one set of rates is required. In this situation, the translation procedure uses the RateEntity dimension to select the correct table of rates to use.
    Whenever a destination currency is selected, the procedure searches for a RateEntity member flagged with this currency in the Currency property. For example, if translating into USD, the system uses the RateEntity member that has the Currency property set to USD.
    If there is no RateEntity flagged with the destination currency, the system will use the RateEntity whose Currency property is blank.
    In addition to this, some exceptions by ENTITY can be applied. For example, some entities just entering the consolidation perimeter may need to be converted at their own specific set of rates. These entities may have a corresponding RateEntity member in the RATE cube. All ENTITIES having a corresponding RateEntity member in the RATE cube will use that member as their rate table. For example, if there is a RateEntity member named like the ENTITY member USOps, the RateEntity member USOps will be used to translate the values of entity USOps.
    The RateEntity member, when representing an ENTITY, may be any of the following:
    A valid base level or parent member ID from the ENTITY dimension of the main cube
    A list of members of the ENTITY dimension, as defined by filtering the members using a value of the DIMLIST property (or any property whose name begins with "DIMLIST") of that dimension.
    For more info, you could refer to the Administration Help found on the Admin console.
    Hope this helps,
    Marvin

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the arrange window however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see on your position box that pops up) Now, will this MIDI note actually be played back at that specific sample point, or will it round the event to the closest tick? (example, if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a midi template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!
    Message was edited by: Matthew Usnick

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts on 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window, and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file, and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed; only the MIDI region was moving) I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
    SO.
    Not only can you move MIDI regions by sample, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
    Message was edited by: Matthew Usnick

  • Question regarding homehub and Open reach router -...

    Hi all,
      I had infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware though.
    I run both my BT Openreach router and BT Home Hub from the same power socket. The problem is, if I turn the plug on so both the Home Hub and Openreach router start up at the same time, the Home Hub will never get an internet connection from the router. To solve this I have to turn the BT Home Hub on first and leave it for a minute, then start the router up, and it all works fine. I'm just curious whether this is the norm or whether I have some faulty hardware?
    Secondly, I appreciate the estimated speed BT quotes isn't always accurate; I was quoted 49Mbit down but received 38Mbit down, which I was happy with. Recently though it has dropped to 30, and I am worried this might continue to drop over time; at present I am 20Mbit down on the estimate. For the record, 30Mbit is actually fine and probably more than I would ever need, but if I could boost it somehow I would be interested to hear from you.
    Thanks.

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.

  • Question regarding IWDTree and context Value Node naming

    Hi,
    I have a question regarding the IWDTree / IWDTreeNodeType components.
    I have a context looking like this:
    Context
      + ResponseNode
        + PersonNode (1..1)
          + PersonAddressNode                    (empty node, placeholder)
          | + AdresNode (0..n)
          + PersonChildNode                      (empty node, placeholder)
          | + PersonNode (0..n)
          |   + PersonAddressNode                (empty node, placeholder)
          |     + AddressNode (0..n)
          + PersonParentsNode                    (empty node, placeholder)
            + PersonNode (0..n)
              + PersonAddressNode                (empty node, placeholder)
                + AddressNode (0..n)
    The context represents a person, a person's address, and a person's children and parents with their respective addresses.
    As a result, on different branches, a PersonNode and AddressNode can appear.
    And for some strange reason, all PersonNodes and AddressNodes link to the same ResponseNode.PersonNode.PersonParentsNode.PersonNode and ResponseNode.PersonNode.PersonParentsNode.PersonNode.PersonAddressNode.AddressNode respectively, regardless of their branch...
    Is it illegal to have multiple PersonNode and AddressNode node names, and should they be named uniquely?

    Generally, node names need to be unique inside the context; attributes in different nodes can have the same names. I wonder if the context structure you described will even compile without errors.
    The WD Tree can only be used with recursive context nodes or with a hierarchy of non-singleton child nodes.
    Can you give an example of how your tree should look at runtime?

  • Question regarding roaming and data usage

    I am currently out of my main country of service, and as such I have a question regarding roaming and data usage.
    I am told that airplane mode is sufficient to keep the phone from roaming, but does this apply to any background data usage for applications and such?
    If the phone is in airplane mode, is all use of the phone, including wifi and application use over wifi, free of any extra roaming charges?

    Ann154 wrote:
    If you are getting charged to use the wifi, then it is possible.  Otherwise no
    Just to elaborate here: Ann154 is referring to access charges for wifi, which have nothing to do with Verizon. They apply if you are using wifi in a plane, a hotel, an internet cafe, etc. that charges for wifi rather than offering it for free. Verizon does not charge you for (or indeed know about!) wifi usage, or any other usage that is not on their cellular network (such as using a foreign SIM in global phones), so these charges, if any, will not show up in the Verizon bill app. Having it in airplane mode prevents all cellular data traffic, so you should be fine.

  • Question regarding MM and FI integration

    Hi Experts
    I have a question regarding MM and FI integration
    Is the transaction key in OMJJ the same as the OBYC transaction key?
    If yes, then why can't I see transaction key BSX in movement type 101?
    Thanks

    No, they are not the same.  The movement type transaction (OMJJ) links the account key and account modifier to specific movement types.  Transaction code OBYC contains the account assignments for all material document postings, whether they are movement type dependent or not.  Account key BSX is not movement type dependent; instead, BSX depends on the valuation class of the material, so it won't show in OMJJ.
    thanks,

  • Question regarding 3G and wifi

    I have a question regarding 3G and wifi. I have 3G activated as well as wifi; when I go to retrieve mail, for example, I get a pop-up asking me if I want to connect to a wifi network… Should I have wifi and 3G activated at the same time, and why am I getting the pop-up?
    Thanks

    You can have them on at the same time, but they will not be used at the same time for data. The order of preference for data is WiFi > 3G > EDGE > GPRS. You're getting the pop up, most likely, because you have Settings > Wi-Fi > Ask to Join Networks set to ON. You can set that to OFF, and the iPhone will still join known (i.e. previously used) WiFi networks automatically.

  • Question regarding Dashboard and column prompt

    My question regarding Dashboard and column prompts:
    1) Dashboard prompts usually work only with columns which are in the subject area. In my report I've created some columns which are based on other columns. For example, I have a daysNumber column that is based on two other columns, as it calculates the difference of two dates. When I create a dashboard prompt I can't find this column there, but I need to make a prompt on this column.
    2) For one of the columns I have only two values, 1 and 0. When I create a prompt for this column, is it possible that the drop-down list shows 'Yes' for 1 and 'No' for 0 and still filters the request?

    Hi Toony,...
    I think there is another way of doing this...
    In the dashboard prompt, go to the Show option and select SQL Results from the dropdown.
    There you need to write your logical SQL, like:
    SELECT CASE WHEN 1=0 THEN PERIODS.YEAR ELSE <your date-difference formula> END FROM SubjectAreaName
    Here, PERIODS.YEAR is a column which already exists in the repository's presentation layer, and <your date-difference formula> stands for the formula of the column you want to show in the drop-down.
    Also write the same CASE WHEN 1=0 THEN PERIODS.YEAR ELSE <your date-difference formula> END code in the fx of that prompt.
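    For instance, if daysNumber is the number of days between two date columns, the prompt SQL might look like the following sketch - the "Orders" table and column names here are hypothetical placeholders, while TIMESTAMPDIFF with SQL_TSI_DAY is standard OBIEE logical SQL for a difference in days:
    SELECT
      CASE WHEN 1=0
           THEN PERIODS.YEAR
           ELSE TIMESTAMPDIFF(SQL_TSI_DAY, "Orders"."Order Date", "Orders"."Ship Date")
      END
    FROM SubjectAreaName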
    I think it helps you in doing this..
    Just check and inform me if it works...
    Thanks & Regards
    Kishore Guggilla
    Edited by: Kishore Guggilla on Oct 31, 2008 9:35 AM

  • Questions regarding customisation/configuration of PS CS4

    Hello
    I have accumulated a list of questions regarding customising certain things in Photoshop. I don't know if these things are doable and if so, how.
    Can I make it so that the list of blending options for a layer is by default collapsed when you first apply any options?
    Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions

    NyanPrime wrote:
    <answered above>
    Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions
    2.  No.  It's a sore spot that got some forum time when Photoshop CS4 was first released, then again with CS5.  It's said that the rules change slightly when using full-screen mode, though I personally haven't tried it.
    3.  Not sure, since I haven't tried it.  However, you may want to explore the Edit - Keyboard Shortcuts... menu, if you haven't already.
    4.  What buttons are you talking about?  Those you are creating in your document?  If so, choose the layer you want to lock in the LAYERS panel, then look at the little lock buttons just above the listing of the layers.
    5.  There are many, many options for positioning and sizing panels.  Most start with making a panel visible, then dragging it somewhere by its little tab.  One of the important features is that you can save your preferred layout as a named workspace.  Choose the Window - Workspace - New Workspace... to create a new named workspace (or to update one you've already created).  The name of that menu is a little confusing.  Once you have created your workspace, if something gets out of place, choose Window - Workspace - Reset YourNamedWorkspace to bring it back to what was saved.
    You'll find that panels like to "stick together", which helps with arranging them outside of the Photoshop main window.
    As an example, I use two monitors, with my preferred layout saved as a named workspace.
    6.  No, it's not possible to affect the layer names Photoshop generates, as far as I know.  I have gotten in the habit of immediately naming them per their usage, so that I don't confuse myself (something that's getting easier and easier to do...).
    Hope this helps!
    -Noel

  • Question regarding ONT connection via Ethernet and Cable cards

    Hi,
    We recently upgraded to the FiOS Quantum 150Mbps/65Mbps plan. We are not getting the advertised speeds (we only get about 5Mbps upload), so Verizon is sending a tech to switch the ONT connection from coax to ethernet.
    I have 2 questions regarding this new setup:
    1. If the ONT communicates with the FiOS Actiontec router via ethernet instead of coax from now on, how will the set-top boxes and the TiVo-esque cable-card-powered device I currently have connected to coax talk to the Verizon system, if coax is taken out of the equation? Will the FiOS signal still flow through the internal coax wiring of the house? Moreover, I was under the impression that coax was the way set-top boxes communicated and derived independent IP addresses from the FiOS router, for on-demand purposes and whatnot. How will this work from now on?
    Question 2.
    Right next to the wall where the ONT sits, there's a basement office where we have a PC that connects to the FiOS system via an Actiontec MoCA adapter (ECB2500C), which I assume derives its internet connection from the FiOS Actiontec router which sits upstairs in the living room.
    Again, with the coax about to be disabled next Friday in favor of an ethernet connection from the ONT, I assume this PC will be left without internet because of the lack of internet signal on the coax? Is this correct?
    Question 2.5: If my above assumption is correct, since this office is right on the other side of the wall where the ONT sits outside the house, would it be possible to run an ethernet wire through the wall straight from the ONT to an ethernet switch inside the office, from which I would derive a connection for this basement PC (properly firewalled, of course), and then, from said switch, continue running the ethernet wire up to the Actiontec FiOS router upstairs, from which the rest of the house derives its internet? And would this setup affect in any way the proper functioning of the cable boxes in the house?
    I'd appreciate your input and any help you can provide so I can have a ballpark idea of what to tell the Fios guy to do when he comes on Friday.
    Cheers.

    It's not valid to have two devices (PC and VZ router) connected to the ONT; it must be a single device. The ONT locks onto the MAC address of the first device it sees. Since you have TV, you should have the VZ router as the internet-facing router.
    Other options:
    1.  Have the VZ Router located next to the PC in the basement and then use Wireless for all other PC's.
    2.  Have the VZ Router located next to the PC in the basement but run one wire upstairs and connect a switch where other PCs and devices can connect via a wire.
    Hope that helps.

Maybe you are looking for

  • Two cRIOs with the same code in 1 project

    I have a project that contains two cRIOs that run the exact same program.  The only difference is the IP address in the ni-rt.ini file.  I would like to create a project that represents the actual targets but uses the same VIs.  In doing this I found

  • Planning on installing HP Officejet Pro 8000 Wireless Printer

    I have Cisco Network Magic Version 5.5.9195.0-pure0 (base) installed on Windows XP (SP2) - ethernet cable connected to a new Linksys router E2000.  Cable connection (via Comcast) to the internet.  Will this HP printer be a good choice (has

  • Problem - Column title not appearing in excel sheet

    Hi Gurus, I am working on a report where I need to add two fields in the output (Excel format). I have included the fields in the output structure. While debugging, my final internal table has both the new headers and their values. But both the heading

  • Program ID in RFC Adapter

    Hi, could you please explain me the about program id in configuring sender RFC communication channel. Regards, Nagarjuna.

  • Incorrect value for global variable

    Hi, I am facing an issue in a query. In the query there is a global variable 'Volume type'. In the Default Value tab of the variable, the default value is given as Litre. In the table RSZGLOBV also I have checked the global variable, and there also the