Questions regarding features of BPM

Hi,
I am currently writing my diploma thesis and evaluating different BPM solutions. One of them is SAP NetWeaver BPM.
Although I studied many PDFs and tried to install the trial version, some questions remained. Maybe you can help? I don't need step-by-step tutorials; I just need to know whether these features are available at all.
Development Studio
1. How are KPIs defined in Process Composer? Is there a process-expert-friendly way?
2. Is there any form of "Complex Event Processing", e.g. sub-events, notifications, manipulation of rules?
3. Is there any form of XSLT data conversion? Maybe with visual support?
Regarding Process Server and Process Desk
4. Can you Start/Stop/Hold processes?
5. Can you change context data of running processes?
6. How are long running processes stored?
7. Which component is intended for Business Activity Monitoring (BAM)?
Sincerely Thomas

Hi Thomas,
Regarding your questions: we are currently implementing SAP NetWeaver BPM. Below are some answers to the best of my knowledge:
1. How are KPIs defined in Process Composer? Is there a process-expert-friendly way?
- Only at a very elementary level in the current version; the process list viewer in the portal is available for KPIs. Custom reports are not supported as of now (although they can be achieved with some complex custom coding).
2. Is there any form of "Complex Event Processing", e.g. sub-events, notifications, manipulation of rules?
- Yes, e.g. notifications and BRM.
3. Is there any form of XSLT data conversion? Maybe with visual support?
- Not sure, but I think it's not supported.
Regarding Process Server and Process Desk
4. Can you Start/Stop/Hold processes?
- Yes, you can start/stop/suspend processes.
5. Can you change context data of running processes?
6. How are long running processes stored?
- Process instances reside on the server.
7. Which component is intended for Business Activity Monitoring (BAM)?
- Process Monitoring can be used here.
Hope this helps!!
Cheers,
Arafat

Similar Messages

  • Questions regarding customisation/configuration of PS CS4

    Hello
    I have accumulated a list of questions regarding customising certain things in Photoshop. I don't know if these things are doable and if so, how.
    Can I make it so that the list of blending options for a layer is by default collapsed when you first apply any options?
    Can I make it possible to move the canvas even though I'm not zoomed in enough to only have parts of it visible on my screen?
    Is it possible to enable a canvas rotate shortcut, similar to the way you can Alt+RightClick to quickly change brush size?
    Is it possible to lock button positions? Sometimes I accidentally drag them around when I meant to click.
    Is it possible to lock panel sizes? For example, if I have the Navigator and the Layers panels vertically in the same group, can I lock the height of the navigator so that I don't have to re-adjust it all the time? Many panels have a minimum height so I guess what I am asking for is if it's possible to set a maximum height as well.
    Is it possible to disable Photoshop from automatically appending "copy" at the end of layer/folder names when I duplicate them?
    These are things I'd really like to change to my liking as they are problems I run into on a daily basis.
    I hope someone can provide some nice solutions

    NyanPrime wrote:
    <answered above>
    2.  No.  It's a sore spot that got some forum time when Photoshop CS4 was first released, then again with CS5.  It's said that the rules change slightly when using full-screen mode, though I personally haven't tried it.
    3.  Not sure, since I haven't tried it.  However, you may want to explore the Edit - Keyboard Shortcuts... menu, if you haven't already.
    4.  What buttons are you talking about?  Those you are creating in your document?  If so, choose the layer you want to lock in the LAYERS panel, then look at the little buttons just above the listing of the layers:
    5.  There are many, many options for positioning and sizing panels.  Most start with making a panel visible, then dragging it somewhere by its little tab.  One of the important features is that you can save your preferred layout as a named workspace.  Choose the Window - Workspace - New Workspace... to create a new named workspace (or to update one you've already created).  The name of that menu is a little confusing.  Once you have created your workspace, if something gets out of place, choose Window - Workspace - Reset YourNamedWorkspace to bring it back to what was saved.
    You'll find that panels like to "stick together", which helps with arranging them outside of the Photoshop main window.
    As an example, I use two monitors, and this is my preferred layout:
    6.  No, it's not possible to affect the layer names Photoshop generates, as far as I know.  I have gotten in the habit of immediately naming them per their usage, so that I don't confuse myself (something that's getting easier and easier to do...).
    Hope this helps!
    -Noel

  • Hello, I have a question regarding the sharing/exporting on imovie. Whenever I click the share button all the normal options pop up, but when I actually click where I want to share it to nothing happens.  If you know what's wrong please let me know.

    Hello, I have a question regarding the sharing feature in iMovie. I recently purchased an Elgato Gaming Capture HD, finished my recording with it, and put the footage into iMovie. I worked long and hard on the project, and when I click the share feature in iMovie all the normal options pop up, but when I actually click where I want to share it to, nothing at all happens. If you know what is wrong / what I am doing wrong, please let me know.
    Thank you.
    PS:  I am using iMovie 10.0.6.

    /* line 957 error: the method body, loop, and if block were missing their braces */
    public void select() {
        for (count = 0; count <= p; count++) {
            if (P[count] != null) { /* validation: skip empty entries */
                m = (int) (P[count].getX());
                n = (int) (P[count].getY());
                /* squares where (m + n) is even get the white highlight, odd ones the black one */
                if (Math.pow(-1, m + n) == 1)
                    piece[m][n].setBackground(wselect);
                else
                    piece[m][n].setBackground(bselect);
            }
        }
        step = 2;
    }

  • Question regarding placing cache-related classes into a package

    Hi all,
    I have a question regarding placing classes into packages. I am writing a cache feature which caches results that were evaluated previously. Since it is a cache, I don't want to expose it outside, because it is only for internal purposes. I have 10 classes related to this cache feature. All of them are used only by the cache manager (the manager class which manages the cache). So I thought it would make sense to keep all the classes in a separate package.
    But the problem I have is: since the cache-related classes are not exposed outside, I can't make them public. If they are not public, I can't access them from the other packages of my code. I can't make them either public or private. Can someone suggest a solution for my problem?

    haki2 wrote:
    But the problem I have is: since the cache-related classes are not exposed outside, I can't make them public. If they are not public, I can't access them from the other packages of my code.
    Well, you shouldn't access them in your non-cache code.
    As far as I understand, the only class that other code needs to access is the cache manager. That one must be public. All other classes can be package-private (a.k.a default access). This way they can access each other and the cache manager can access them, but other code can't.
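    To make that concrete, here is a minimal sketch of the layout the reply describes. The package name com.example.cache and the class names CacheEntry and CacheManager are illustrative placeholders, not from the original post: only the manager is public, the helper is package-private.
    package com.example.cache;

    import java.util.HashMap;
    import java.util.Map;

    // Package-private helper: visible only to classes inside com.example.cache.
    class CacheEntry {
        final Object value;
        final long createdAt = System.currentTimeMillis();

        CacheEntry(Object value) {
            this.value = value;
        }
    }

    // Public facade: the only type that code in other packages can see and use.
    public class CacheManager {
        private final Map<String, CacheEntry> entries = new HashMap<>();

        public void put(String key, Object value) {
            entries.put(key, new CacheEntry(value)); // allowed: same package as CacheEntry
        }

        public Object get(String key) {
            CacheEntry e = entries.get(key);
            return e == null ? null : e.value;
        }
    }
    Code in another package can call new CacheManager(), put() and get(), but new CacheEntry(...) will not compile there, which is exactly the encapsulation asked about.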

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I do have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day, with a maximum of about 5,000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure. This would result in the need for at least 5 CUs just to handle the inserts.
    Since one CU consists of 2,000 RUs, I would expect the RU usage to be about 4 RUs per single document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumptions (ok… obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I would also need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since I guess deleting data on a single-document basis is not an option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally according to the number of available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection into which I would actually insert documents).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example:
    1 CU -> 2,000 RUs
    7 collections -> 2,000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for hot collection (values from documentation): 20,000 RUs
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
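    For what it's worth, here is a small sketch that just reproduces the arithmetic above; the 2,000 RUs per CU, the 7 collections and the 20,000 RU target are the figures from this post, not official guidance, and Java is used only to match the other code snippets in this archive.
    // Back-of-the-envelope capacity estimate for the example above.
    public class CapacityEstimate {
        public static void main(String[] args) {
            final double rusPerCu = 2000.0;      // 1 CU -> 2,000 RUs
            final int collections = 7;           // 1 hot collection + 6 historic collections
            final double neededHotRus = 20000.0; // throughput needed on the hot collection

            // Preview behaviour described above: throughput is split evenly across collections.
            double rusPerCollectionPerCu = rusPerCu / collections; // ~286 RUs

            // Same division rearranged to avoid floating-point noise: 20,000 * 7 / 2,000 = 70.
            double cusNeeded = Math.ceil(neededHotRus * collections / rusPerCu);

            System.out.printf("RUs per collection per CU: %.0f%n", rusPerCollectionPerCu);
            System.out.printf("CUs needed for the hot collection: %.0f%n", cusNeeded);
        }
    }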
    I know that DocumentDB is currently in Preview and that it is not possible to handle this use case as is, because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can, or should, be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2,500 inserts per second) but would like to switch to DocumentDB, since with Table Storage I had to optimize for writes per second and I get horrible query execution times because of full table scans.
    Once again my desired setup:
    200,000,000 inserts per day / maximum of 5,000 writes per second
    Collection 1.2 -> Hot Collection: All writes (max 5000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future, with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    First of all, thanks for your reply to my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts and about 1,000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected… even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were nearly possible.
    Here again my findings regarding RU consumption for batch inserts:
    Automatic indexing on: 777 RUs for 50 documents
    Automatic indexing off & mandatory path only: 655 RUs for 50 documents
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs for 50 documents
    Expected result: approximately 100 RUs (2,000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and possible CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options… which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2,000 RUs, or is this a totally theoretical value? Or is it just because this is a Preview, and the stated values are planned to work later?
    Regarding your feedback:
    ...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching "RequestRateTooLargeExceptions" and retrying after the server specified retry interval…
    Sadly this is not possible for me, because I have to query the data in near real time for my use case… so queuing is not an option.
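    In case it helps anyone reading along, below is a generic sketch of the "catch the throttling error and retry after the server-specified interval" idea. It is deliberately not tied to the DocumentDB SDK: ThrottledException, getRetryAfterMillis() and the insert callback are hypothetical placeholders, and Java is used only to match the other snippets in this archive.
    import java.util.concurrent.Callable;

    public final class ThrottleRetry {

        /** Hypothetical throttling signal carrying the server-specified wait time. */
        public static class ThrottledException extends Exception {
            private final long retryAfterMillis;

            public ThrottledException(long retryAfterMillis) {
                this.retryAfterMillis = retryAfterMillis;
            }

            public long getRetryAfterMillis() {
                return retryAfterMillis;
            }
        }

        /** Runs the insert, waiting out the server-specified interval whenever it is throttled. */
        public static <T> T withRetry(Callable<T> insert, int maxAttempts) throws Exception {
            for (int attempt = 1; ; attempt++) {
                try {
                    return insert.call();
                } catch (ThrottledException e) {
                    if (attempt >= maxAttempts) {
                        throw e; // give up after the configured number of attempts
                    }
                    Thread.sleep(e.getRetryAfterMillis()); // honour the server's retry interval
                }
            }
        }
    }
    As noted above, this only smooths bursts; it does not help when the data has to be queryable in near real time.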
    We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it.
    I guess I could circumvent this by clustering not into "hot" and "cold" collections but into "hot" and "cold" databases, each with one or multiple collections (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you could answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2,000 RUs in reality? ;-)
    Best regards and thanks again

  • I got a question regarding running iOS and Windows using virtual software.

    Greetings!
    I have a question regarding running iOS and Windows using virtual software. I recently bought a monitor so I can display Windows on it and run both OSes at the same time. Right now I'm using BootCamp. I downloaded VirtualBox for "transferring" Windows to it. Since I'm a new iOS user, what do I need to do in order to make it work?
    Do I need to un-install Windows from BootCamp,install VirtualBox and then install them again?
    Any information would be appreciated!

    That should work: to use OS X and MS Windows simultaneously, you can run MS Windows in VirtualBox and use a separate display for the VirtualBox window. However, remember that VirtualBox is freeware and not a commercial application like Parallels or VMWare Fusion, and it may not have the features of a commercial application. Support for Windows run in any virtualization application (VirtualBox, Parallels or Fusion) is not generally provided on this forum, as they are not OS X related. To get help with those apps you will usually need to go to their forums.
    Remember that iOS will not run on either OS X or MS Windows; it only works on iOS devices.
    Good luck with your installation.

  • Question regarding pse 8

    When I have multiple pictures up, how do I keep two pictures from merging when I move one of the pictures to the side?  I would like to turn this feature off.
    Thanks.

    I opened 4 photos in Adobe Photoshop Elements 8 and have set the pictures to "float in all windows" so I can easily grab any one of them and move it to compare pictures. What happens is that the picture I am moving fades and suddenly I have two pictures, one on top of the other in one frame. In the bar at the top of the "merged" picture, it shows the file names of two pictures. From that top bar, I can undo the "merge" by grabbing one of the pictures and dragging it to the left or right, and I am back to the two original pictures. I would like to be able to move pictures around without two frames becoming one frame with two pictures layered. Hope that is a better explanation. Thanks, Diane
    hatstead wrote:
    I don't understand. Multiple pictures up - what does that mean? 2 pictures merging when you move one to the side - how do you make them merge and move to the side? Feature off - when does this feature come on?

  • Several questions regarding File Vault

    Hi!
    I have several questions regarding File Vault - right now I'm using Mac OS 10.4.8
    1.: The battery lock of my iBook is defective, so from time to time the battery drops out while I am transporting the laptop in its sleep state. What happens to the FileVault disk image?
    2.: I want to (have to) set up my Intel iMac again. The installer CD I have will bring it back to 10.4.6.
    AFAIK the data format used for FileVault since 10.4.7 is version 2. What happens if I encrypt my stuff now (10.4.8, thus version 2), back it up to my backup disc, install a new system (10.4.6, therefore version 1) and want to access my data via Migration Manager (I don't want to use Archive and Install)?
    3.: How do I actually do a backup of my data while the system is running? The backup should be encrypted as well.
    I use the demo-version of SuperDuper for backing up my system because with it I can ensure that I have a complete bootable backup of my running system.
    Thanks for your answers in advance
    ibook g4 12" 1.2 GHz 768 MB RAM / Intel iMac Core 2 Duo 17" 2.0 GHz 2GB RAM   Mac OS X (10.4.8)  

    Parker,
    You said: "1. If it did, Apple would not use FileVault, as everyone's computer will have a battery problem once in their life, and Apple would lose business from angry people who lost all of their data." I have seen enough reports of data loss with FileVault that I feel compelled to dispute your statement.
    In Data corruption and loss: causes and avoidance, Dr. Smoke writes: "If your data-security needs demand FileVault, you should back up your encrypted Home folder regularly, preferably daily. Like any hard drive or disk image, a Home folder protected by FileVault — an encrypted, sparse disk image — does not respond well to the causes of data corruption." Loss of power definitely is a cause of data corruption.
    For Niels,
    An Unencrypted Look at FileVault, by François Joseph de Kermadec is an excellent discussion of the features, pitfalls, and cautions regarding Filevault.
    Although the article discusses Panther and is dated 12/19/2003, the concepts as they apply to Tiger have not changed.
    The cautions and warnings are prominent in any of the Apple Knowledge Base articles referring to the use of FileVault. If a user is unfamiliar with any aspect of FileVault, it should not in my opinion be activated.
    As good as FileVault is at protecting your sensitive data, it also presents the danger of locking up your files in an irretrievable ball of ones and zeros. Backups are critical. You must ensure that you have a comprehensive backup plan. Backup and Recovery, by Dr. Smoke, is a fine example of what you need to consider.
    ;~)

  • A few questions regarding SAP EWM and WM

    Hello,
    I have a few general questions regarding the differences between EWM and WM:
    1) What are the benefits of EWM-MFS compared to WM + TRM (especially in terms of SPS)?
    2) The Quality Inspection Engine (QIE) can also be used by SAP WM, right?
    3) There is RFID-support in EWM, so EWM is able to communicate directly with SAP Auto-ID, right?
         But I have heard that SAP PI is necessary in some cases, when and why?
    4) Is there something new in EWM regarding goods receipt processing?
        I have read that the splitting of inbound delivery items is possible in EWM in case of missing inbound delivery items. Is this really  a new feature?
    5) EWM can easily be connected to SAP BW for reporting purposes, what about WM?
    6) What about scalability if the warehouse grows?
    7) Is there any information about the costs of using EWM compared to WM and vice versa?
    I appreciate any kind of help.
    Thank you.
    Dennis

    Hi,
    1. What does SAP offer as a product for dWM? Is it a "special" installation of the SAP framework dedicated to WM, or is it a standard ECC box where only the WM module is used?
    There are two versions of decentralized WM. One is Decentralized WM as part of ECC, and the other is EWM as part of SCM. Both are decentralized.
    2. My understanding is that the interfaces between ERP and dWM can support some non-real-time operations (like when the main ERP system is down, the dWM can still perform some operations). Considering that the transactional interfaces are based on BAPIs, how does SAP achieve this interfacing in non-real-time environments? I am thinking you cannot complete the processing unless both systems are up.
    When it comes to interfaces, dWM needs deliveries from ERP. That's it; WM can function from there independently of the ERP system. But WM definitely needs to communicate back PGI and PGR and other posting changes. So, in case ERP is down, even though PGI/PGR is done at the WM end, they may not be communicated back to ERP. But WM generates PGI/PGR IDocs, which can always be reprocessed at the WM end to resend them to ERP so that inventory levels are accurate.
    Hope that helps
    Thanks
    Vinod.

  • Widget question regarding system usage

    I'm a recent convert from the PC world and am finding the Dashboard feature very useful. I did, however, have a question regarding the way widgets in the Dashboard access the iMac's system resources.
    Specifically, I was wondering: if a widget is installed and appears in the "Manage Widgets" list but is NOT active on the Dashboard, does that widget still use the system's resources? (i.e. is it still actively updating its information or performing its task) Or does it "sleep" until you actively enable it in the Dashboard?

    Welcome to Discussions!
    I believe widgets don't consume resources unless you have them open in Dashboard; just having them installed, but not open, wouldn't consume resources.
    Since you're new to mac, you may want to check out Mac 101 and Switch 101.
    Message was edited by: joshz

  • Question Regarding MIDI and Sample Accuracy

    Hi,
    I have 2 questions regarding MIDI.
    1. MIDI is moved by ticks. In the Arrange window, however, you can move a region by samples. When doing this, you can move within values of the ticks (which you can see in the position box that pops up). Now, will this MIDI note actually be played back at that specific sample point, or will it round the event to the closest tick? (For example, if I have a MIDI note directly on 1.1.1.1, and I move the REGION in the Arrange... will that MIDI note now fall on the sample that I have moved the region to, or will it be rounded to the closest tick?)
    2. When making a MIDI template from an audio region, will the MIDI information land exactly on the sample of the transient, or will it be rounded to the closest tick?
    I've looked through the manual, and couldn't find any specific answer to these questions.
    Thanks!

    Ok, I've done some experimenting, and here are my results.
    I believe those numbers ARE samples. I came to this conclusion by counting (for some reason it starts at 11) and cutting a region to be 33 samples long (so, minus 11, is 22 actual samples). I then went to the Audio Bin window and chose to view region length as samples. And there it said it: 22 samples. So, you can in fact move MIDI regions by samples!
    Second, I wanted to see if the MIDI notes in the region itself would be quantized to the nearest tick. I cut a piece of audio so it had a 1-sample attack (zoomed in as far as I could in the sample editor, selected the smallest portion, faded in, and made the start point the region start position). I saved the region as a new audio file and loaded it up in the EXS sampler.
    I then made a MIDI region and triggered the sample on beat 1 (quantized, on the money). I then went into the arrange window, made a fixed cycle length, and bounced the audio. I then moved the MIDI region by one sample to the right. I did this 22 times (which is the number of samples in a tick at 120 BPM, apparently). After bouncing all of these (the cycle position remained fixed, only the MIDI region was moving), I imported all the audio into the arrange on new tracks, and YES!!! The sample start was cascaded by a sample each time!
    SO.
    Not only can you move MIDI regions by sample, but the positions are NOT quantized to Logic's ticks!
    This is very good news, and glad I worked this out!
    (if anyone thinks this sounds wrong, please correct me, but I'm pretty sure I proved it, in my test)
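    As a rough cross-check of the "22 samples per tick" figure: assuming a 44.1 kHz sample rate and 960 ticks per quarter note for Logic's resolution (both are assumptions on my part, the post states neither), the arithmetic lands in the same ballpark.
    public class TickLength {
        public static void main(String[] args) {
            double bpm = 120.0;             // tempo used in the experiment above
            double sampleRate = 44100.0;    // assumed; not stated in the post
            double ticksPerQuarter = 960.0; // assumed Logic resolution

            double samplesPerQuarter = sampleRate * 60.0 / bpm;          // 22,050 samples per quarter note
            double samplesPerTick = samplesPerQuarter / ticksPerQuarter; // roughly 23 samples per tick

            System.out.printf("Samples per tick at %.0f BPM: %.2f%n", bpm, samplesPerTick);
        }
    }
    That works out to roughly 23 samples per tick, close to the 22 counted in the experiment.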

  • Question regarding homehub and Open reach router -...

    Hi all,
      I had infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware though.
    I run both my BT Openreach router and BT Home Hub from the same power socket. The problem is, if I turn the plug on so that both the Home Hub and the Openreach router start up at the same time, the Home Hub will never get an Internet connection from the router. To solve this I have to turn the BT Home Hub on first and leave it for a minute, then start the router up, and it all works fine. I'm just curious whether this is the norm or whether I have some faulty hardware.
      Secondly, I appreciate that the estimated speed BT quotes isn't always accurate. I was quoted 49 Mbit/s down but received 38 Mbit/s down, which I was happy with. Recently, though, it has dropped to 30, and I am worried this might continue to drop over time; at present I am 20 Mbit/s down on the estimate. For the record, 30 Mbit/s is actually fine and probably more than I would ever need, but if I could boost it somehow I would be interested to hear from you.
    Thanks.

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.

  • Question regarding IWDTree and context Value Node naming

    Hi,
    I have a question regarding the IWDTree / IWDTreeNodeType components.
    I have a context looking like this:
    Context
      + ResponseNode
        + PersonNode (1..1)
          + PersonAddressNode                    (empty node, placeholder)
          | + AdresNode (0..n)
          + PersonChildNode                      (empty node, placeholder)
          | + PersonNode (0..n)
          |   + PersonAddressNode                (empty node, placeholder)
          |     + AddressNode (0..n)
          + PersonParentsNode                    (empty node, placeholder)
            + PersonNode (0..n)
              + PersonAddressNode                (empty node, placeholder)
                + AddressNode (0..n)
    The context represents a person, a person's address, and a person's children and parents with their respective addresses.
    As a result, on different branches, a PersonNode and AddressNode can appear.
    And for some strange reason, all PersonNodes and AddressNodes link to the same ResponseNode.PersonNode.PersonParentsNode.PersonNode and ResponseNode.PersonNode.PersonParentsNode.PersonNode.PersonAddressNode.AddressNode respectively, regardless of their branch...
    Is it illegal to have multiple PersonNode and AddressNode node names, and should they be named uniquely?

    Generally, node names need to be unique inside the context; attributes in different nodes can have the same names. I wonder if the context structure you described will result in code without compile errors.
    The WD Tree can only be used with recursive context nodes or with a hierarchy of non-singleton child nodes.
    Can you give an example of how your tree should look at runtime?

  • Question regarding roaming and data usage

    I am currently out of my main country of service, and as such I have a question regarding roaming and data usage.
    I am told that airplane mode is sufficient to keep the phone from roaming, but does this apply to any background data usage for applications and such?
    If the phone is in airplane mode, is all use of the phone, including wifi and application use over wifi, free of any extra roaming charges?

    Ann154 wrote:
    If you are getting charged to use the wifi, then it is possible.  Otherwise no
    Just to elaborate here: Ann154 is referring to access charges for wifi, which have nothing to do with Verizon, e.g. if you are using wifi in a plane, hotel, or internet cafe that charges for it rather than offering it for free. Verizon does not charge you for (or indeed know about!) wifi usage, or any other usage that is not on their cellular network (such as using a foreign SIM in a global phone), so these charges, if any, will not show up on the Verizon bill app. Having the phone in airplane mode prevents all cellular data traffic, so you should be fine.

  • Question regarding MM and FI integration

    Hi Experts
    I have a question regarding MM and FI integration
    Is the transaction Key in OMJJ is same as OBYC transaction key?
    If yes, then why can't I see transaction key BSX in movement type 101?
    Thanks

    No, they are not the same. The movement type transaction (OMJJ) links the account key and account modifier to specific movement types. Transaction code OBYC contains the account assignments for all material document postings, whether they are movement type dependent or not. Account key BSX is not movement type dependent. Instead, BSX depends on the valuation class of the material, so it won't show in OMJJ.
    thanks,
