Architecture Question regarding RPD (Best design practice)

Hello, I need some design help regarding the best way to build an RPD for a bank. Your help/guidance is greatly appreciated, as always.
Following is example data:
Revenue ALL (1 million records)
Revenue by filter A ONLY (250k)
Revenue by filter B ONLY (50k)
Revenue by filter C ONLY (150k)
Revenue by filter D ONLY (25k)
Requirement:
Report Revenue ALL
Report Revenue % A of ALL = (250k / 1 million) * 100
Report Revenue % B of ALL = (50k / 1 million) * 100
Report Revenue % C of ALL = (150k / 1 million) * 100
Report Revenue % D of ALL = (25k / 1 million) * 100
Should I build this from a single FACT source, or should I have something like the following:
Source one: Revenue ALL
Source two: Revenue by filter A ONLY (250k)
Source three: Revenue by filter B ONLY (50k)
Source four: Revenue by filter C ONLY (150k)
Source five: Revenue by filter D ONLY (25k)
All of these will have ONE common Bank dimension allowing me to join across if needed.
Essentially, the question is: should I use a single source table containing ALL data, or multiple sources each providing exactly what I am looking for?
Thanks,
Jes

I would use a single source with data at the ALL level and then filter it as needed. Use
100.00 * COUNT(column filtered by ...) / COUNT(column)
to get your percentages.
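If it helps: the usual way to express those ratios against a single ALL-level source is a FILTER-based logical column in the BMM layer. A minimal sketch of one such measure (the logical table, column, and literal 'A' are placeholders for your own names):
-- OBIEE logical column expression (sketch): Revenue % A of ALL
100.0 * FILTER("Revenue Facts"."Revenue" USING "Filter Dim"."Filter Code" = 'A')
      / "Revenue Facts"."Revenue"
One such column per filter keeps all five percentages on the single fact source, so the BI Server pushes each predicate into the generated SQL instead of you maintaining five pre-filtered fact tables.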

Similar Messages

  • Best Design practices of SAN with the MDS 9513, MDS 9509 and Brocade 8510

    Hi,
    I am searching for the best design to implement Cisco MDS 9513, MDS 9509, Brocade 8510, storage, and UCS all combined in one topology. Please also suggest a tool to compare MDS and Brocade 8510 performance.

    Boomi,
    Both MDS and Brocade will serve the basic features of storage networking. Both can be mixed and matched to achieve redundancy, which you already have. However, if you are looking for a comparison tool or perfmon, there isn't much to compare; you can use IOmeter or Akkori. I see that you have enterprise-level hardware in your setup. I'm not sure what other line cards you have installed, what applications are running through it, or whether you have remote sites (SAN islands); with those details the real difference in features and best practices can be discussed. For example, IVR, FCIP, iSCSI, FCoE, etc.
    Thanks,
    Nisar

  • Need help regarding MM best purchasing practices

    Hi,
    could anyone tell me about SAP MM best purchasing practices?

    Find below the best practices that can be considered in general:
    1) Master Data
    a) Central Master data with uniform naming convention
    b) Master Maintenance tool (MDM) with workflow for approval
    2) Collection of Requirements from User departments and Approval-
    a) Creation of PR in SAP and Release strategy with work flow
    b) PR creators and PR Release authorities (work flow) are linked with HR org structure.
    c) Approvals/notifications for inaction/reminders can be routed with the help of workflow.
    3) Budget Control for PR-
    a) By activating Funds Management OR
    b) By Internal Order OR
    c) Customer Exit to validate The PR value/Price according to allocated budget
    4) Buyer activity
    a) Automatic source determination and automatic creation of PO with release strategy.
    b) PO creation with reference to Long term contract (Release orders) for regular vendors plus Global contract
    c) Uniform way of raising PO with terms and conditions
    d) RFQ for Bidding Process plus release strategy.
    e) Vendor rating
    f) PO Price Difference Tolerance check/Validation in comparison with PR.
    5) GRN
    a) Central warehouse
    b) Acceptance of Materials subject to quality check
    c) Normal Service Procurement with Service Master 
    d) Service entry sheet with release strategy.
    6) Invoice
    a) Invoice Block parameters/Tolerances
    b) Invoice Approval work flow
    c) GR based invoice Verification for Normal Purchase.
    d) Activate Material Ledger to overcome the disadvantages of posting Invoice with Price difference in terms of valuation
    e) Uniform way of Invoice Posting/Payment
    f) ERS

  • Design/Architecture questions regarding streams, triggers, JMS, XML

    I need to devise a way to enable an application that I am currently developing to act upon a change to a database table and to generate a message to notify an external system of the change.
    The scenario I envisage is:
    - A 3rd party (a user or a batch job) modifies (INSERT, DELETE, UPDATE) a row in a given table.
    - There is a trigger that is fired by the modification which would put a message onto a Stream (thus the notification would be persistent if the database were to go down).
    - A Java server process would be running that continually checks the Stream (referenced above) for a message. When the message has been dropped in the queue by the trigger, the Java server process would read the message and determine what had changed, and then it would generate an XML message that it would pass on to the external system to notify it of the change. NOTE: The external system would not have access to the database, so the outbound XML message would contain all of the information that describes what has changed.
    This sounds simple enough to me, but there is a fair bit of "hand waving" around how this would actually work in practice!
    Can anyone provide any assistance with any Oracle infrastructure that I might be able to use to ease any of this? My main area of concern is how the trigger should indicate what has changed when a modification occurs. I could create the trigger to write a message in a simple (but proprietary) format - or potentially XML - to the Stream queue, and then the Java server process could read the message (via OJMS), parse it, and determine via JDBC what modification actually occurred, but this could be quite a bit of work... Is there a smarter way to do this?
    Perhaps I could use XMLDB to allow the trigger to immediately render the change into XML format which would slightly ease the parsing that the Java server process has to do.
    Any help would be greatly appreciated!
    Thanks,
    James

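    For reference, a minimal sketch of the Java server process side, assuming the trigger enqueues into an AQ queue exposed through OJMS (the JNDI names "jms/ChangeQueueFactory" and "jms/ChangeQueue" and both helper methods are hypothetical):
    // Polling consumer using the standard javax.jms API over OJMS.
    import javax.jms.*;
    import javax.naming.InitialContext;
    public class ChangeListener {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("jms/ChangeQueueFactory");
            QueueConnection conn = qcf.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = (Queue) ctx.lookup("jms/ChangeQueue");
            QueueReceiver receiver = session.createReceiver(queue);
            conn.start();
            while (true) {
                // receive() blocks until the trigger enqueues a change notification.
                TextMessage msg = (TextMessage) receiver.receive();
                String xml = toXml(msg.getText());   // render the payload as XML
                sendToExternalSystem(xml);           // e.g. HTTP POST to the other system
            }
        }
        private static String toXml(String payload) { return payload; /* transform here */ }
        private static void sendToExternalSystem(String xml) { /* push to external system */ }
    }
    Whether the payload is already XML (rendered by the trigger, e.g. via XMLDB) or a proprietary string the listener must parse is exactly the trade-off the question raises; the consumer loop looks the same either way.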

  • Best design practices

    Which is preferable in design? I want your opinion so I can design better apps. The JSP below shows the amount of interest for an already instantiated class with the proper data loaded. I just want to show the interest to the user via JSP, but what's the best way to provide flexibility in the coding? Providing more methods that do the same thing seems redundant and harder to maintain. Which of the five options below would you use?
    creditSummer.jsp
    <td><%=credit.getCurrentBalance()*credit.getApr()/100/365%></td>
    <td><%=credit.getCurrentBalance()*credit.getApr()/100/365*7%></td>
    <td><%=credit.getCurrentBalance()*credit.getApr()/100/365*14%></td>
    <td><%=credit.getCurrentBalance()*credit.getApr()/100/12%></td>
    or
    <td><%=credit.getInterestPerDay()%></td>
    <td><%=credit.getInterestPerWeek()%></td>
    <td><%=credit.getInterestPerTwoWeeks()%></td>
    <td><%=credit.getInterestPerMonth()%></td>
    or
    <td><%=credit.getInterest(Credit.PERDAY)%></td>
    <td><%=credit.getInterest(Credit.PERWEEK)%></td>
    or
    <td><%=credit.getInterest("Per Day")%></td>
    <td><%=credit.getInterest("Per Week")%></td>
    or
    <%// method signature: Credit.getInterest(int days)%>
    <td><%=credit.getInterest(1)%></td>
    <td><%=credit.getInterest(7)%></td>
    <td><%=credit.getInterest(14)%></td>
    --Gregory

    <td><%=credit.getInterest(1)%></td>
    <td><%=credit.getInterest(7)%></td>
    <td><%=credit.getInterest(14)%></td>
    I prefer to use the above signature in the JSP. Instead of doing all the calculation in the view page, we can put the calculation logic in the beans and just use the JSP to display data to the user.
    baiju
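    For what it's worth, a minimal sketch of that bean method, assuming the fields the getters in the first snippet imply (currentBalance and apr are assumptions, not code from the thread):
    // Hypothetical Credit bean keeping the interest math out of the JSP.
    public class Credit {
        private double currentBalance;
        private double apr; // annual percentage rate, e.g. 19.9

        // Simple interest accrued over the given number of days at the daily rate.
        public double getInterest(int days) {
            double dailyRate = apr / 100.0 / 365.0;
            return currentBalance * dailyRate * days;
        }
        // constructors, getters and setters omitted
    }
    A call like credit.getInterest(30) then approximates the monthly figure the first snippet computed with apr/100/12, while keeping all calculation logic in one place.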

  • Questions regarding the best module to study

    Hello, I found a teacher who will give me a course in Oracle EBS, and he offered me a choice of two programs:
    1- Study the financials module, then the projects module (bills, management, costing)
    2- Study the financials module, then supply chain, then manufacturing (BOM, process, etc.)
    He will also give me information on the databases and general details on the administration of the system.
    The question now:
    1 - Which is better in the market: training to become a system administrator in a company, or to become a system implementer?
    2 - Which is better to study first: the first program, in the projects field, or the second program, in manufacturing?
    FYI, I already have experience in both fields (projects & manufacturing) because of my work as director of computers in a company.
    Thank you in advance

    Hi user;
    You need to decide what you want to do in the future and which role suits you more: being a system admin, or working on the client side as a functional consultant. Those subjects are totally different.
    Please also check below links:
    EBS-Career
    Re: Oracle APPS Career ... Technical or Functional Consultant???
    Regards
    Helios

  • Architecture question regarding record management

    We are designing a contract management system in SharePoint 2013. The contracts are separated by departments and by locations. Users in a particular department/location should only have access to contracts in their own department/location.
    We are thinking of creating one site collection for each department - this ensures that we stay within the 200 GB content database recommendation. In each department site collection, we will have one document library with one folder for each location - permission inheritance is broken on each folder to restrict access on a location basis. We chose one document library so that we can use out-of-the-box SharePoint views for reports across locations for executives to see.
    The problem with this design is that there are about 250 different locations. This means, in each department site collection, we will have to create 250 folders in our document library and break inheritance on the 250 folders.
    Another approach would be to give no one access to the document library and use a custom drop-off library form to add documents. We can create a web part which elevates permissions and displays only those documents that users are supposed to see. The advantage of this approach is much less broken permission inheritance. The disadvantage is that we won't be able to use OOTB views and we'll have to implement our own search.
    Thoughts appreciated!

    You could consider "Audience Targeting". It works well with the Content Query Web Part. You can enable it on one document library, and it provides a TargetAudience column allowing you to assign groups/roles to the content. This could get around the creation of separate site collections and the breaking of permission inheritance.
    http://technet.microsoft.com/en-us/library/cc261958(v=office.14).aspx#Section4

  • Where to post Architecture question regarding Hyp Plan/Essbase installation

    1st Installation (uses servers HypDB1 and Hypapp1) ----- PRODUCTION
    Server HypDB1: ESSBASE1, HYP Shared Services1, EAS1, ODI, Oracle 11g
    Server Hypapp1: Hyp Plan1, Studio1, Apache, Workspace1
    2nd Installation (uses servers HypDB2 and Hypapp2, BUT shares the Oracle 11g database of the 1st installation) ---- PROPOSED ADD-ON TO PRODUCTION
    Server HypDB2: ESSBASE2, HYP Shared Services2, EAS2, ODI
    Server Hypapp2: Hyp Plan2, Studio2, Apache, Workspace2
    Is the above possible?

    Another idea I found from this app note.
    http://www.xjtag.com/app-note-16.php
    But there is a note in this app note that speaks to the DONE signal.
    http://www.xjtag.com/app-note-14.php
    "When the programming operation of a Xilinx FPGA completes it will toggle its DONE signal; if this occurs when it is not expected then the PROM or processor that configures the FPGA can automatically re-start the process of programming the FPGA with its functional image (undoing the clearing that has just been done through XJTAG)."
    I couldn't find any info about this in the 7 Series Config User Guide. Does this apply to the 7 Series FPGAs?

  • Architecture question, global VDI deployment

    I have an architecture question regarding the use of VDI in a global organization.
    We have a pilot VDI Core with a remote MySQL setup and 2 hypervisor hosts. We want to bring up 2 more hypervisor hosts (and VDI Secondaries) in another geographic location, where the local employees would need to connect to desktops hosted at their physical location. What we don't want is to have to manage multiple VDI Cores. Ideally we would manage the entire VDI implementation from one pane of glass, having multiple Desktop Provider groups to represent the geographical locations.
    Is it possible to just setup VDI Additional Secondaries in the remote locations? What are the pros and cons of that?
    Thanks

    Yes, simply bind individual interfaces for each domain on your web server,
    one for each.
    Ensure the appropriate web servers are listening on the appropriate
    interfaces and it will work fine.
    "Paul S." <[email protected]> wrote in message
    news:407c68a1$[email protected]..
    >
    Hi,
    We want to host several applications which will be accessed as:
    www.oursite.com/app1 www.oursite.com/app2 (all using port 80 or 443)
    Is it possible to have a separate WebLogic domain for each application, all listening on ports 80 and 443?
    Thanks,
    Paul

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc.  I get that it's best practice to try and separate traffic where you can, especially for things like FT; I just wondered if there is a preferred method for achieving this.  What I mean is...
    -     Is it OK to have everything on one switch but set each respective port group to have a primary and failover NIC, i.e. FT, iSCSI and all the others fail over (this would sort of give you a backup in situations where you have limited physical NICs)?
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which traffic I should segregate on its own separate switch?  Is there some sort of ranking order of priority/importance?  FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on how many physical NICs you have at your disposal, but I wondered if there are any golden 100% rules - for example, FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch, etc.

  • Question regarding Command pattern

    Hi!
    I have a question regarding the Command pattern:
    //Invoker as defined in GoF Design Patterns
    public class SomeServer {
        //Receiver as defined in GoF Design Patterns.
        private Receiver receiver;
        //Request from a network client.
        public void service(SomeRequest request) {
            Command cmd = CommandFactory.createCommand(request);
            cmd.execute();
        }
    }
    The concrete command which implements Command needs a reference to the Receiver in order to execute its operation, but how is the concrete command best configured? Should I send the Receiver along with the request as a parameter to the createCommand method, should I configure the receiver inside the CommandFactory, or should I send it as a parameter to the execute method? Since SomeServer acts as both client and invoker, SomeServer "knows" about the command's receiver. Is this a bad thing?
    Regards
    /Fredrik
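    One common way to wire this up, sketched against the names in the snippet above (OpenCommand, CloseCommand and getType() are stand-ins for your concrete commands and request API): construct the factory with the receiver once, so every command comes out fully configured and execute() stays parameterless.
    //Hypothetical factory that injects the receiver at creation time.
    public class CommandFactory {
        private final Receiver receiver;

        public CommandFactory(Receiver receiver) {
            this.receiver = receiver;
        }

        public Command createCommand(SomeRequest request) {
            //Each concrete command gets the receiver through its constructor.
            if ("open".equals(request.getType())) {
                return new OpenCommand(receiver, request);
            }
            return new CloseCommand(receiver, request);
        }
    }
    With this shape, SomeServer holds a CommandFactory instance instead of calling a static method, and no longer needs to know about the receiver at all, which addresses the "is this a bad thing?" worry.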

    #!/bin/bash
    # Look for any file whose name contains today's date stamp.
    DATE=$(date '+%y-%m-%d')
    if find . | grep -q "$DATE"; then
        echo "OK - Backup files found"
        exit 0
    else
        echo "Critical - No backups found today!"
        exit 2
    fi
    should work too and it's a bit shorter.
    Please remember to mark the thread as solved.

  • Question regarding DocumentDB RU consumption when inserting documents & write performance

    Hi guys,
    I have some questions regarding the DocumentDB Public Preview capacity and performance quotas:
    My use case is the following:
    I need to store about 200,000,000 documents per day with a maximum of about 5,000 inserts per second. Each document has a size of about 200 bytes.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts, and about 1,000 per second with a batch insert using a stored procedure. This would result in a need of at least 5 CUs just to handle the inserts.
    Since one CU consists of 2,000 RUs, I would expect the RU usage to be about 4 RUs per single-document insert, or 100 RUs for a single SP execution with 50 documents.
    When I look at the actual RU consumption, I get values I don't really understand:
    Batch insert of 50 documents: about 770 RUs
    Single insert: about 17 RUs
    Example document:
    {"id":"5ac00fa102634297ac7ae897207980ce","Type":0,"h":"13F40E809EF7E64A8B7A164E67657C1940464723","aid":4655,"pid":203506,"sf":202641580,"sfx":5662192,"t":"2014-10-22T02:10:34+02:00","qg":3}
    The consistency level is set to “Session”.
    I am using the SP from the example C# project for batch inserts and the following code snippet for single inserts:
    await client.CreateDocumentAsync(documentCollection.DocumentsLink, record);
    Is there any flaw in my assumption (ok… obviously) regarding the throughput calculation, or could you give me some advice on how to achieve the throughput stated in the documentation?
    With the current performance I would need to buy at least 40 CUs, which wouldn't be an option at all.
    I have another question regarding document retention:
    Since I would need to store a lot of data per day, I also would need to delete as much data per day as I insert:
    The data is valid for at least 7 days (it actually should be 30 days, depending on my options with DocumentDB).
    I guess there is nothing like a retention policy for documents (this document is valid for X days and will automatically be deleted after that period)?
    Since deleting data on a single-document basis is no option at all, I would like to create a document collection per day and delete the collection after a specified retention period.
    Those historic collections would never change but would only receive queries. The only problem I see with creating collections per day is the missing throughput:
    As I understand it, the throughput is split equally across the available collections, which would result in "missing" throughput on the actual hot collection (hot meaning the only collection into which I would actually insert documents).
    Is there any (better) way to handle this use case than to buy enough CUs so that the actual hot collection gets the needed throughput?
    Example:
    1 CU -> 2,000 RUs
    7 collections -> 2,000 / 7 = 286 RUs per collection (per CU)
    Needed throughput for the hot collection (values from the documentation): 20,000 RUs
    => 70 CUs (20,000 / 286)
    vs. 10 CUs when using one collection and batch inserts, or 20 CUs when using one collection and single inserts.
    I know that DocumentDB is currently in preview and that it is not possible to handle this use case as is because of the current limit of 10 GB per collection. I am just trying to do a POC so I can switch to DocumentDB when it is publicly available.
    Could you give me any advice on whether this kind of use case can or should be handled with DocumentDB? I currently use Table Storage for this case (currently with a maximum of about 2,500 inserts per second) but would like to switch to DocumentDB, since I had to optimize for writes per second with Table Storage and I have horrible query execution times because of full table scans.
    Once again my desired setup:
    200,000,000 inserts per day / maximum of 5,000 writes per second
    Collection 1.2 -> Hot collection: all writes (max 5,000 p/s) will go to this collection. Will also be queried.
    Collection 2.2 -> Historic data, will only be queried; no inserts
    Collection 3.2 -> Historic data, will only be queried; no inserts
    Collection 4.2 -> Historic data, will only be queried; no inserts
    Collection 5.2 -> Historic data, will only be queried; no inserts
    Collection 6.2 -> Historic data, will only be queried; no inserts
    Collection 7.2 -> Historic data, will only be queried; no inserts
    Collection 1.1 -> Old, so delete whole collection
    As a matter of fact, the perfect setup would be to have only one (huge) collection with automatic document retention… but I guess this won't be an option at all?
    I hope you understand my problem and can give me some advice on whether this is at all possible, or will be possible in the future with DocumentDB.
    Best regards and thanks for your help

    Hi Aravind,
    first of all, thanks for your reply regarding my questions.
    I sent you a mail a few days ago, but since I did not receive a response I am not sure it got through.
    My main question regarding the actual usage of RUs when inserting documents is still my main concern, since I cannot insert nearly as many documents as expected per second and CU.
    According to the documentation (http://azure.microsoft.com/en-us/documentation/articles/documentdb-manage/) I understand that I should be able to store about 500 documents per second with single inserts, and about 1,000 per second with a batch insert using a stored procedure (20 batches per second containing 50 documents each).
    As described in my post, the actual usage is multiple (actually 6-7) times higher than expected, even when running the C# examples provided at:
    https://code.msdn.microsoft.com/windowsazure/Azure-DocumentDB-NET-Code-6b3da8af/view/SourceCode
    I tried all the ideas Steve posted (manual indexing & lazy indexing mode) but was not able to reduce RU consumption to a point where 500 inserts per second were anywhere near possible.
    Here again my findings regarding RU consumption for batch inserts of 50 documents:
    Automatic indexing on: 777 RUs
    Automatic indexing off & mandatory path only: 655 RUs
    Automatic indexing off & IndexingMode Lazy & mandatory path only: 645 RUs
    Expected result: approximately 100 RUs (2,000 RUs => 20x batch insert of 50 => 100 RUs per batch)
    Since DocumentDB is still in Preview, I understand that it is not yet capable of handling my use case regarding throughput, collection size, number of collections and available CUs, and I am fine with that.
    If I am able to (at least nearly) reach the stated performance of 500 inserts per second per CU, I am totally fine for now. If not, I have to move on and look for other options… which would also be "fine". ;-)
    Is there actually any working example code that manages to do 500 single inserts per second with one CU's 2,000 RUs, or is this a totally theoretical value? Or is it just because of being Preview, and the stated values are planned to work?
    Regarding your feedback:
    "...another thing to consider is if you can amortize the request rate over the average of 200 M requests/day = 2,000 requests/second, then you'll need to provision 16 capacity units instead of 40 capacity units. You can do this by catching 'RequestRateTooLargeExceptions' and retrying after the server-specified retry interval…"
    Sadly this is not possible for me, because I have to query the data in near real time for my use case, so queuing is not an option.
    "We don't support a way to distribute throughput differently across hot and cold collections. We are evaluating a few solutions to enable this scenario, so please do propose it as a feature at http://feedback.azure.com/forums/263030-documentdb as this helps us prioritize feature work. Currently, the best way to achieve this is to create multiple collections for hot data, and shard across them, so that you get more proportionate throughput allocated to it."
    I guess I could circumvent this by clustering not into "hot" and "cold" collections but into "hot" and "cold" databases, with one or multiple collections each (if 10 GB remains the limit per collection), if there were a way to (automatically?) scale the CUs via an API. Otherwise I would have to manually scale down the DBs holding historic data. I also added a feature request as proposed by you.
    Sorry for the long post, but I am planning the future architecture for one of our core systems and want to be sure I am on the right track.
    So if you are able to answer just one question, it would be this:
    How do I achieve the stated throughput of 500 single inserts per second with one CU's 2,000 RUs in reality? ;-)
    Best regards and thanks again
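    For the retry interval mentioned above, a minimal sketch of catching RequestRateTooLarge with the preview-era .NET SDK (the wrapper name InsertWithRetryAsync is hypothetical; DocumentClientException and its RetryAfter property come from Microsoft.Azure.Documents):
    // Retries a single insert whenever DocumentDB answers 429 (RequestRateTooLarge),
    // waiting the server-specified interval before trying again.
    using System;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.Documents.Client;

    static class DocDbWriter
    {
        public static async Task InsertWithRetryAsync(
            DocumentClient client, string collectionLink, object record)
        {
            while (true)
            {
                try
                {
                    await client.CreateDocumentAsync(collectionLink, record);
                    return;
                }
                catch (DocumentClientException e)
                {
                    // Rethrow anything that is not a throttling response.
                    if (e.StatusCode == null || (int)e.StatusCode != 429) throw;
                    await Task.Delay(e.RetryAfter);
                }
            }
        }
    }
    As the thread notes, this only helps when the workload can amortize bursts; it does not raise the sustained throughput per CU.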

  • Best business practices

    I want to know the best business practices suggested by SAP to improve client revenues.

    11,
    Your question is vague.  For an equally vague answer, try the SAP Best Practices web site.
    http://help.sap.com/bestpractices
    ERP Baseline BP Building blocks, localized for India, can be found at
    http://help.sap.com/saap/sap_bp/BL_ERP605_IN/html/Content_Library_BL_EN_IN.htm
    Best Regards,
    DB49

  • Inheritance architecture question

    Hello,
    I've an architecture question.
    We have different types of users in our system, normal users, company "users", and some others.
    In theory they all extend the normal user, but I've read a lot about performance issues with join-based inheritance mapping.
    How would you suggest to design this?
    Expected are around 15k normal users, a few hundred company users, and even a few hundred of each other user type.
    Inheritance mapping? Which type?
    No inheritance and append all attributes to one class (and leave these not used by the user-type null)?
    Other ways?
    thanks
    Dirk

    Sorry dude, but there is only one way you are going to answer your question: research it. And that means try it out. Create a simple prototype setup where you have your inheritance structure and generate 15k users of data in it - then see what the performance is like with some simple test cases. Your prototype could be promoted to be the basis of the end product if the results are satisfying. If you know what you are doing this should only be a couple of hours of work - very much worth your time, because it is going to potentially save you many refactoring hours later on.
    You may also want to experiment with different persistence providers by the way (Hibernate, Toplink, Eclipselink, etc.) - each have their own way to implement the same spec, it may well be that one is more optimal than the other for your specific problem domain.
    Remember: you are looking for a solution where the performance is acceptable - don't waste your time trying to find the solution that has the BEST performance.
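    If you do prototype it, the join-based mapping the question asks about is only a couple of annotations; a minimal sketch with hypothetical entity names (AppUser/CompanyUser stand in for your user types):
    // JPA joined-table inheritance: one table per class, joined by primary key.
    import javax.persistence.*;

    @Entity
    @Inheritance(strategy = InheritanceType.JOINED)
    class AppUser {
        @Id @GeneratedValue
        Long id;
        String name;
    }

    @Entity
    class CompanyUser extends AppUser {
        String companyName; // stored only in the CompanyUser table
    }
    Changing JOINED to SINGLE_TABLE gives the "append all attributes to one class" variant (one wide table with nullable columns), so the prototype can compare both strategies with a one-line change, which keeps the measurement the reply recommends cheap.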

  • Question regarding Home Hub and Openreach router -...

    Hi all,
      I had Infinity installed earlier this month and am happy with it so far. I do have a few questions regarding the service and hardware, though.
      I run both my BT Openreach router and BT Home Hub from the same power socket. The problem is, if I turn the plug on so both the Home Hub and Openreach router start up at the same time, the Home Hub will never get an Internet connection from the router. To solve this I have to turn the BT Home Hub on first and leave it for a minute, then start the router up, and it all works fine. I'm just curious whether this is the norm or whether I have some faulty hardware.
      Secondly, I appreciate the estimated speed BT quotes isn't always accurate; I was quoted 49 Mbit down but received 38 Mbit down, which I was happy with. Recently, though, it has dropped to 30, and I am worried this might continue to drop over time; at present I am 20 Mbit below the estimate. For the record, 30 Mbit is actually fine and probably more than I would ever need, but if I could boost it somehow I would be interested to hear from you.
    Thanks.

    Just a clarification: the two boxes are the HomeHub (router, black) and the modem (white).  The HomeHub has its own power switch, the modem doesn't.
    There is something wrong if the HomeHub needs to be turned on before the modem.  As others have said, in general best to leave the modem on all the time.  You should be able to connect them up in any order, or together.  (For example, I recently tripped the mains cutout, and when I restored power the modem and HomeHub went on together and everything was ok).
    Check if the router can connect/disconnect from the broadband using the web interface.  Leaving the modem and HomeHub on all the time, go to http://192.168.1.254/ on a browser on a connected computer, and see whether the Connect/Disconnect button works.
