Over 30Mb below estimate

I was wondering if anybody here could help. 
Recently we upgraded to the up-to-80 package from the full 40 package. After doing the telephone number check, the estimated speed was 76Mb (we live less than 100 yards from the exchange). We were getting between 36-38 on the 40 package, so we decided to renew our contract for another 18 months, believing we would at least get over 50.
After the stability period we tested the speed, which did go up from 36-38 to 41-43 on an IP profile with a max of 48, so an increase of between 3 and 7 has been gained. After calling BT, they said that they would not even look into why the speed did not increase further, as (quoted) "you have achieved an increase in speed".
The main hardware used (with all other devices switched off) is an iMac (2011), fully updated, connected directly to the gigabit port on a HH3 (tested on the other ports - same result), with all other applications closed, tested through various browsers.
It was brought to my attention that the building we are in suffered power outages during the training period (due to communal repairs).
The questions I have are:
1. How can the landline number checker estimate be so far off the actual speed (quoted 76, received 41-43; at best 33 below the estimate) when the previous landline check worked so well (quoted 38, received 36-38)?
2. Can the stability period be reset (quoted no)?
3. Is there any way to get BT to look into the problem (told no as increase has been achieved)?
Any help would be greatly appreciated.

Hi
Are you sure that the cabinet you can see is the one that is actually serving you?
It may be that due to pair availability, your service comes via a different route from the exchange and thus a different cabinet.
What results do you get when you run the BT speedtest?
http://speedtest.btwholesale.com/
After the initial test, you will need to select the "further diagnostic" test and then wait for the IP profile to show.
The thing to bear in mind is that if you didn't get at least 38.7mbit down on the original Infinity, you probably won't get any more than that on Infinity2. This may be down to your distance from the cabinet, how many cable joints there are between you and the cabinet, and whether there is any aluminium in your line. All of these can adversely affect the speeds you are getting.
I speak as someone who has already been through this pain! 
ptan

Similar Messages

  • Bridge CS6 did not download, PS window over extends below Mac icons at bottom

    I subscribed to CS6 and Bridge CS6 is not activated. When I try, it takes me back to Bridge CS4. Also, my PS window extends below my laptop icons, so I am not able to see the bottom icons of my layers palette. I've tried to get tech support for 3 days: I spent 3 hrs waiting on the phone the 1st day, 3.5 hrs on the second (my phone number was taken by the contact person but I never received a call back), and today I've waited since 10 am and it's now 10:30 pm (via the prompter I was told to expect a call back at 11:30 am). I don't understand what the problem is. When I subscribed there was a friendly operator to sign me up without a hassle.

    Screenshots tell a lot. I would guess, though, that it works as it always has - by default, PS will open in fullscreen, and if you don't auto-hide your dock and keep it in the foreground all the time, naturally it will interfere with any app running in fullscreen. For Bridge, you can change the associations manually in the Bridge prefs by enabling/disabling the relevant startup scripts.
    Mylenium

  • Allow content in tabbed spry to hover over content below in container

    The content in my main container under the tabbed spry shifts down when I choose my content in the tabbed spry.  Is there a way for the
    tabbed content to hover over the main container without shifting the content down?

    You may be better off using a tooltip instead. Have a look here http://labs.adobe.com/technologies/spry/samples/tooltip/Tootlip_with_HTML_Panel.html
    Gramps
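    If the tooltip route doesn't fit, another option is to absolutely position the panel content so it overlays the container instead of pushing it down. A minimal sketch, assuming the default Spry tabbed-panels class names (.TabbedPanels and .TabbedPanelsContentGroup); the width and background values are illustrative:

    ```css
    /* Let the tab panels float over the container below instead of
       pushing it down. Positioning context comes from the widget root. */
    .TabbedPanels {
      position: relative;
    }
    .TabbedPanelsContentGroup {
      position: absolute;   /* taken out of normal flow: nothing shifts */
      z-index: 10;          /* draw above the main container's content */
      width: 100%;          /* illustrative; size to taste */
      background: #fff;     /* opaque so underlying text doesn't show through */
    }
    ```

    Because the panels no longer occupy space in the flow, the container underneath keeps its position; the trade-off is that long panel content will cover whatever sits below it.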

  • Weblogic Admin server 10.1mp3 migration to 12c over windows

    Hi,
    The admin server is currently on 10.0mp3 on Windows 2008 Server. This needs to be migrated to 12c on another Windows server.
    The admin server is associated with MachineA, and there are 3 managed WebLogic servers on MachineB. Can anyone suggest the ways I can migrate?
    What changes need to be made to the files below?
    commEnv.cmd
    startManagedWebLogic.cmd
    wlsvc.exe - I am seeing this file for the first time in WebLogic 12c; what is its purpose?
    Under the D:\bea\wls12\server\bin directory, what changes are needed to:
    installSvc.cmd
    installNodeMgrSvc.cmd
    I also need the syntax to install the WebLogic admin and managed servers as Windows services with a customized service name.

    Thanks for the reply, this looks fine (I think). The output is below:
    Microsoft Windows [Version 6.0.6001]
    Copyright (c) 2006 Microsoft Corporation. All rights reserved.
    C:\Users\Administrator.SEALEDINFO-PROD>cd C:\Oracle\MiddlewareNew\user_projects\
    domains\irm_domain\bin
    C:\Oracle\MiddlewareNew\user_projects\domains\irm_domain\bin>setdomainenv.cmd
    C:\Oracle\MiddlewareNew\user_projects\domains\irm_domain>java -version
    java version "1.6.0_18"
    Java(TM) SE Runtime Environment (build 1.6.0_18-b07)
    Java HotSpot(TM) Client VM (build 16.0-b13, mixed mode)
    C:\Oracle\MiddlewareNew\user_projects\domains\irm_domain>java weblogic.version
    WebLogic Server 10.3.3.0 Fri Apr 9 00:05:28 PDT 2010 1321401
    Use 'weblogic.version -verbose' to get subsystem information
    Use 'weblogic.utils.Versions' to get version information for all modules
    C:\Oracle\MiddlewareNew\user_projects\domains\irm_domain>
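    On the service questions above: as far as I know, wlsvc.exe is the 12c replacement for the old beasvc.exe Windows-service wrapper that installSvc.cmd invokes. A minimal wrapper sketch for registering the admin server as a service follows; every path, name, and credential in it is a placeholder, and the variable names follow the 10.3-era installSvc.cmd conventions, so verify them against your 12c copy of the script:

    ```bat
    rem Sketch only: register the admin server as a Windows service.
    rem All names/paths/credentials below are placeholders; verify the
    rem variable names against the 12c installSvc.cmd before use.
    SETLOCAL
    set DOMAIN_NAME=irm_domain
    set USERDOMAIN_HOME=C:\Oracle\MiddlewareNew\user_projects\domains\irm_domain
    set SERVER_NAME=AdminServer
    set WLS_USER=weblogic
    set WLS_PW=your_password_here
    rem installSvc.cmd registers the service as "%DOMAIN_NAME%_%SERVER_NAME%"
    rem (via beasvc in 10.3, wlsvc in 12c); edit that line in the script
    rem itself if you need a fully customized service name.
    call "D:\bea\wls12\server\bin\installSvc.cmd"
    ENDLOCAL
    ```

    The same pattern with a different SERVER_NAME (and ADMIN_URL set to the admin server) covers the managed servers.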

  • "Scratch Disk Full" in Photoshop CS3, 30MB Free

    I've been all over the net and can't find an answer.
    I have only one drive (I will be fixing that problem tomorrow), a 250GB on my PowerMac G5. Disk Info says there are 50 GB Available. Photoshop preferences agree. Still, I cannot crop a lousy 4MB file.
    Viewed and repaired permissions - wasn't much there, actually. Got Onyx, ran everything I could (although I don't really understand all of it), no change.
    I am a professional photographer with many large (over 30MB) files, but I have never run into anything like this. I read that Photoshop runs into fragments and goes no further, even if there is empty space on the disk. At the same time I read that Tiger defragments automatically. I dunno.
    Activity Monitor says I have plenty of memory.
    Am I looking at a total disk formatting adventure?

    Bob,
    Activity Monitor says I have plenty of memory.
    "Memory" refers to RAM, whereas the "scratch disk" is your hard drive. 50 GB available before you start working with Photoshop files may seem a lot, but when you open large files, it is not that much. It may really run full. If the same is the case with a little 4 MB file (without having just worked on a 30 MB file during the same session), this is really weird.
    Check your disk activity with Activity Monitor and select "Hard Disk Load" (or something like this; I don't work on an English OS): CPU > Memory > Hard Disk Activity > Hard Disk Load > Network.
    Have you got an external hard drive with more than 50 GB free that you could use as a scratch disk test drive instead of your internal?
    Peter

  • Labview 8 - Tree drop and Below Target

    Hi everyone,
    I developed some applications with Labview 7.1, and I used the Below Target data available as event data in the Drop? event.
    About tree and drop event in Labview 8,  I already read the very interesting discussion here :http://forums.ni.com/ni/board/message?board.id=170&message.id=158683. 
    However I have some problems because the Below Target event data is not available now (and neither is the Drop? event). To get the destination tag, I use "Point to Row Column", but in the tree, during the drag, there is also a horizontal line that appears when the target cell is not highlighted. In Labview 7.1, this was related to the Below Target data, and it was very useful because you could determine whether the drop is over or below the target, or whether you have to add a child to the target. In Labview 8 instead, the only information is the target tag, and this information depends on the exact position of the cursor. For example, if you drop while the horizontal line appears, no cell is highlighted, and the target tag can be the tag above or the tag below depending on the exact coordinates, but the information for the user is the same: a horizontal line below a tag.
    Any suggestion about this problem ?
    Thanks and regards.

    Yes. Do the logic in the dragOver and dragDrop handlers.
    Tracy

  • Router internet speed is slow

    Recently had an issue with my router only giving 10Mbs speeds while my modem was at over 30Mbs (checked by direct connection of the modem to the pc). After checking with Linksys support, I discovered the problem was a setting on my router. If you are using Smart Wi-Fi, go to the initial page. In Media Prioritization, go to Settings. You will see Downstream and Upstream boxes. These can be manually adjusted. Mine was at 10000 Kbps on the downstream, so that was all I was getting. Set it to whatever you want that is faster (mine is at 100000 Kbps with a DOCSIS 3.0 modem). My speeds on Comcast are now 35 Mbps download and almost 6 Mbps upload. Hope this helps anyone.

    Hello Diego,
    If you are having issues with slow performance on your Wi-Fi, take a look at the article below. It will walk you through using Wireless Diagnostics; further down the article there is also a section on slow performance.
    Wi-Fi: How to troubleshoot Wi-Fi connectivity
    http://support.apple.com/en-us/HT202222
    Regards,
    -Norm G. 

  • LR 3.3 doesn't allow "Add" for images on memory card?

    Hi there,
    so far I've been using LR 3.2, and as part of my workflow I simply used "Add" on import to index the files on my memory card, inspected them, moved unwanted files to trash, etc., before moving the remainder over to my NAS storage for further work. Now in LR 3.3 I can only select "Copy as DNG" and "Copy" as possible operations on my memory card; no "Move", no "Add".
    What's up with that? Any way to fix that or do I really need to downgrade to 3.2 to be able to work again?
    Any help would be really appreciated.

    First of all: Thanks for all your suggestions on how I might change my workflow to make it more secure and work with LR even in the future. It's really appreciated. However, and I know I'm repeating myself here, I deliberately bought LR specifically because it was one of the 3 or 4 options that supported my established and perfectly working workflow; if I didn't care I would be using Aperture now, since it is not only cheaper but has some quite clever features even LR doesn't have. BTW: If I didn't care about the UI I would be using Bibble, which also supports my workflow but is inferior in usage and processing quality IMHO. LR 3.3 has broken my workflow in a very subtle but nasty way, and there's just no way I will change my workflow just to keep using LR. I want to be absolutely independent of any vendor; working with catalogs/libraries/databases is okay as long as it's only for temporary memorization of changes.
    degger, I can understand your desire to do a review of the images before having to copy them anywhere, be it a temporary or final location. It could certainly save some time if there are a lot of images that simply don't need to ever make it off the card. Where I get a little confused is that you don't seem to be wanting to do it to save time. You cite bandwidth and storage concerns, and working off a laptop.
    I'm not sure how extra steps and copies would save me any time. Really, it doesn't get any faster than this:
    1) Index/Render preview of all images on card
    2) Inspect those images and remove coasters (note that inspection might require zooming, comparing images for the best shots, applying some temporary modifications like rotation/crop/exposure correction to check whether the RAW base is good enough for later use)
    3) Move the remainder of the images (RAW+JPEG) to their final destination and clear the card
    4) "Add" the images from their final location into LR again and apply the real modifications, exporting the results back to the final location
    Simple, minimizes burden on "final destination", minimizes local disc space use and is safe enough for my use.
    As I said before: memory cards can and will(!) break at any point of their use: during shooting, during the creation of on-site backups, when pulling them onto the laptop, etc. The extra risk of my workflow is so close to zero that I really don't care whether a card fails or not. I'd rather use many smaller cards than one big one so I don't lose all of my photos at once, and I empty them as often as I can rather than having them sit around forever. Also, if you ever hit the delete button on your camera, that is at least as unsafe as deleting them locally.
    What I do care about is that "final destination" is safe and I mean that in every possible sense of the word.
    I didn't see you mention the quantities of images or data you're coming home with, so I'll take what I think is a pretty safe over-the-top estimate of two full 64gb cards per shoot. You say it's a major concern for you, working on a laptop, that you don't have enough local storage to act as a buffer for the images before you delete unwanted images. I guess I'm confused as to how you wouldn't have 128gb of space to act as temporary storage, even on a laptop. I know you say explicitly in one post that you don't, I just don't see how. A $100 drive upgrade doesn't strike me as a major roadblock in a complex storage strategy for someone with networked RAIDs and offsite backup. I could understand if this were a backlog issue, and you weren't sorting through images right away and there wouldn't be enough room for multiple shoots to sit in temporary storage. But that's clearly not the case since your workflow currently deals with the images before they're even off the cards. Spending a bit of money to get a larger drive might not be ideal, and it would require a bit of a change to your workflow but it seems like an awfully easy solution to this problem.
    Right now I have 340.7 MB of free space on my hard drive. That'll be like 10 GB once I've cleared all that garbage I had to create to deal with this very LR issue. Ever changed the hard drive on a MacBook Pro? I don't think so; it's a royal PITA and requires a lot more effort than just walking into a store and buying a new hard drive.
    I'm using brand-name 8GB cards BTW, many of them. I don't trust bigger cards because if they break I'll lose many more pictures. I usually carry more than one camera with me, and I normally don't fill up the cards completely, because it still is a hobby, not a profession, and I don't intend to turn it into one either.
    I also don't follow the use of the trash as a safety net. I know it's not part of your workflow, just something that saves your hide from time to time, but it is very much working against the current. Just like I said previously, if you leave your cards untouched during this whole ingest/sort/archive process, no matter what you do or what happens to your local copies, you will have the original copies on the card until you stick it back in the camera and hit format. Deleting a file (moving it to the trash) during the review/import process is introducing needless complexity, whereas just leaving the card alone offers some innate data security.
    Really, what happens when moving a file to the trashcan is a rewrite of the FAT table so that the file's entry points to a new position inside a special directory named .Trash or something. I agree each write is a danger to the safety of the card, but it's not like we're talking about intensive writes or non-atomic operations here; the data itself remains untouched, but its directory entry is moved out of the way. As far as I recall, I never had to recover any images from the trash, but it makes me feel a little safer should I ever select the wrong image before hitting "Delete Photos...". And no, it doesn't introduce any complexity at all. Extra complexity is when LR has to import and render full-size previews of images that are known to be rubbish, and the extra slowdown to expect when keeping LR open for a long time and doing many operations; my approach always starts with a fresh catalog, and I can easily process my images in small batches to keep memory requirements down.
    Before using LR I had to use a different program just to preview the JPEG images and delete the RAW+JPEG pairs manually, which was more of a PITA, so I was glad to discover that LR 2 had the capability to actually deal with the photos on the card.
    I actually don't see a problem with how you're managing your file structure. It's nearly identical to how I do mine, but unlike you I have never run into an issue where I need to split up an event/session into multiple folders. Even with multi-camera shoots, a single event directory should be able to hold (for a Nikon camera, at least) at least [N * 9999] photos, where N is the number of bodies. How many cards you use during an event shouldn't factor into folder structure at all, unless you do it on purpose. Making another assumption here, but 10,000 photos in a single event is pretty unusual. If I have my camera configured correctly, even if I used 100 512mb cards for an event, as long as I stay below 10,000 shots per camera body, they can all go into the same folder without any naming conflicts.
    Great, I don't see any problem either. Also all my cameras have different naming schemes so the names don't clash.
    I would think moving the files to the NAS is equally as free, with comparable speeds over gigE. The offsite backup is the only part of this equation that could really become more costly as data levels increase. But, even el cheapo cloud backup services and bottom tier home broadband plans offer unlimited data storage and bandwidth; there's no unit variable cost even if you were to backup an entire shoot and decide a week later to trash every single shot. I know there are situations where those kinds of variable costs do come into play, but even with relatively expensive Amazon S3, the cost of transferring and storing 25% bad shots from even a large shoot isn't what I would consider cost prohibitive for a photographer doing that much shooting with an already expensive storage solution.
    Okay, let me explain. My "bad" shot ratio is more like 80% rather than 25%, due to the (sometimes) excessive amount of bursts to catch certain happenings or expressions. "Bad" shots consist of duplicates, shaken shots, undesired motion blur, bad expressions/closed eyes, bad framing, bad exposure or other bad settings, or just my own trigger slowness, etc. I have several levels of backup, ranging from online on-site unversioned, to online on-site versioned to a second RAID, over online off-site versioned to an encrypted hosted storage facility, to off-line versioned and encrypted to external hard drives I keep at several locations. The versioned backups, especially off-site, are a real problem here; the versioning happens on an hourly basis, so any change within an hour will be recorded and kept for at least half a year, including long-deleted files. I have limited upstream bandwidth, and I pay for both local and offsite storage capacity and for excess upstream traffic to my server.
    Again. Photography is a hobby and I already invest quite a lot in it but there have to be boundaries.
    For the import process it seems like you're using now, and looking to continue to use in the future, I'm wondering what you wouldn't be able to do just in the Finder. You've said the actual Lr database isn't of much value to you, so at the import stage why not just run through the images in Finder, delete whatever you want right there, and then move on to any Lr-specific or NAS activities from there. You'll still have the same use of the Trash, and it's easy enough to use Quick Look, large icon previews, and a shortcut key to assign a Finder label to bad images. Then just sort by labels and hit delete. I'm not honestly suggesting that as a good option, but it certainly is an option and seems to fill most of your requirements, especially since it doesn't care where the files are.
    That's exactly what I've been doing all along. Two problems: Preview capabilities are really limited and I have to delete the RAW+JPEG pairs manually.
    Then LR came along and now Adobe took the newly won comfort away.
    I find it hard to believe that simply not having a few extra gigs of HD space on your laptop, or using a different backup service, would create such a distinct workflow that the (fairly extensible) Lr import/management system becomes useless. But hey, maybe it does.
    Indeed it does.

  • I have a mid-2010 iMac and just purchased a 2TB TC, can't join existing wireless network with AC standard so attached to iMac via ethernet with TC wifi turned off.  How do i access TC now? not showing up in disk utility or on desktop. working fine with TM

    I have a mid-2010 iMac and just purchased a 2TB TC. I just found out that it can't join an existing wireless network with the new AC standard, so I attached it to the iMac via ethernet with the TC's wifi turned off. How do I access the TC now? It is not showing up in Disk Utility or on the desktop, though it is working fine with TM. My cheaper Seagate drives etc. kept crashing, so I didn't trust cheaper backup options anymore. I had connected those drives for TM via firewire and could see the drives and access them.
    Also, I didn't want to bridge the TC with my new fios router that I paid 100 dollars for to get N speed, while also paying 10 dollars more a month for faster speed. I heard that bridging slows everything down and then there can be port issues with mail etc. I connect to the internet via airport only and it is pretty fast: getting over 50mbs downloads and over 30mbs uploads. Plus everything in my home is connected to my fios router: airport express for music streaming, two apple tvs, VueZone camera system. I really didn't want to monkey around too much with my system. But are there other options to connect the new TC? Can't find info anywhere for this and called apple, who gave me the info above. After hanging up, I see that I can't access my TC, and I am wondering if I would have to reset it to turn wifi on again to make changes to the drive, turn off the blinking light, or repair it in Disk Utility if it should become corrupted.
    For others with similar issues, I did solve some other problems: when I connected it to the ethernet port on my iMac, wifi stopped working. Found that I had to turn off the ethernet in the System > Network screen, but then TM didn't see the TC, so I restarted after the changes and then it saw it.
    Now a rant. I can't believe in this wireless age that Apple would make a product that can't join a wireless network. The apple rep said I could return it and look for the previous TC that would join an existing wireless network. Are we going backwards?
    Thanks!
    lennydas

    Ok... it is getting a bit clearer but there are still some questions.
    I connect to the internet via airport only and it is pretty fast.
    I was assuming airport in this statement in your first post meant the TC or the Express, but I now realise we are still in the mass-confusion stage where apple calls everything wireless an airport. So what you mean is the internal airport card of the computer?
    Also, I didn't want to bridge TC with my new fios router that I paid 100 dollars for, to get N speed and also paying 10 dollars more a month for fast speed.  I heard that bridging slows down everything and then there can be port issues with mail etc.
    I think this is mistaken.
    Putting the TC in bridge mode plugged into your FIOS will not slow the network.. nor will it cause mail or port issues.. in bridge the TC is just a fancy WAP and switch plus the network hard drive.
    If the computer is close it will be faster than the FIOS.
    You can run both wireless networks with different names.. so it is clear which is which. But you can also setup roaming so the computers themselves pick which is the best wireless.
    I tried extending the wireless network and tried joining the wireless network, but the TC kept crashing and I had to keep resetting it. The Apple support person said these options, extend wireless network and join wireless network, are no longer connection options with the new TC because of the new AC protocol.
    Thanks again!
    You cannot extend to a non-apple wireless router.
    You cannot use "join a wireless network" because when you do, the ethernet ports are cut off.
    But that has not changed; I don't think Apple support is correct. There has been no change with the AC model; it is simply a fact that apple routers do not work in join-wireless mode other than as a dumb client. The same applies to AC as to the earlier version, but I have asked another person to check this.
    The Express is the only apple router that still allows an ethernet connection in join mode.
    For now your best use of the TC is bridged to the FIOS. The wireless side you can sort out between several options.

  • Email Stops Sending - Constant Spinning Black & White Wheel

    I mistakenly attempted to send a large attachment (over 30mb), and since then (10 hours ago) my email account has had the black & white wheel spinning. I am able to receive emails, but not send.
    When looking at the Activity Monitor, the Network traffic is high. When I force mail to quit, Network traffic trails off.
    I've deleted the Outbox.mbox as another post suggested, but this didn't solve the problem.
    The email account causing the issue is a gmail one, setup with IMAP.
    Have checked the Mail/Preferences/Outgoing Mail account, and all looks well.
    Any suggestions?

    You will probably be able to correct this by going to your web interface page for your ISP mail and delete the offending e-mail file from there. I've had this happen a couple of times. And doing this cleared it up. Gmail is a particularly egregious culprit with things like this. Once it gets stuck, you almost always have to go to the web interface to clear up the problem.
    Good Luck.
    Jon

  • Very Common Premiere Error?? "Sorry, a serious error has occurred that..."

    "Sorry, a serious error has occurred that requires Premiere to be shut down. We will attempt to save your current project"
    Yep, that one.  I get it every time I open up Premiere.  The error box reappears after a couple of seconds, then it keeps reappearing and stacking up until I close the program.
    I've done the most recent updates (Vista and Adobe), and spent hours on the net reading and trying different solutions.  To no avail.
    I think this is a pretty common error, no?  But from everything I've read, people get the error using PR mid-program.  For me, I can't even open the program.  After Effects also starts (as a process in Task Manager), but runs in the background.
    I updated MS Office 2007 (big update) and also my NVIDIA drivers (after I started getting this error).
    You can read my recent problem that might help you understand a little bit more about this error on this thread.
    Would upgrading to a Quadro GPU help at all?
    My PC:
    C2D E8400 o'cd @ 3.6GHz
    8GB ram (DDR2 800)
    1TB raid 0 (500GB internal as scratch disk)
    2 512mb 9600GT's in SLi - should I upgrade??  NVIDIA Quadro?
    Vista Ultimate 64-bit

    Thanks, Jim.
    So, (considering my unwillingness to wreck my current setup), would this possibly help my performance:
    OS: (2) 500GB internal SATA's in RAID 0 (as is)
    Scratch Disk: (1) 500GB internal SATA (sort-of as is)
    Media files Disk: (1) 1TB non-'green' internal SATA (new purchase)
    My main question is:  What's the main difference between a Scratch Disk and a Media Files Disk?
    For instance, I have a newly captured clip (let's call it footage.mpeg).  And I have music.mp3 and lotsofpictures.jpg.  So would I store all three of those files on my Media Files Disk, or on the Scratch Disk?  The Scratch Disk was just for rendered files I thought (like when you're working on a project and previewing it?)?  And so is Premiere reading from my Media Files Disk to play footage.mpeg and then using the Scratch Disk to bounce files back and forth?  Sorry if I'm not making ANY sense.
    Is my suggested setup a dumb idea (bad investment)?
    PS - this may be another problem: my Scratch Disk (Hitachi internal 500GB SATA) is an OEM drive shipped running at SATA I for "backward compatibility" with older mobos. So, I downloaded the firmware updater from Hitachi, but the updater wasn't working (locking up after booting from CD). I figured it would be fine. Well, SATA I runs at a much slower speed than SATA II (the new standard). But I was transferring files between the drive and my OS RAID configuration, and Vista was reporting over 30MB/s sustained write speed, which by the way is not SATA II (also called SATA 3.0Gbit/s) speed. I think this may be causing some of my performance issues. What do you think? Normally, SATA II should run at 300MB/s (so they claim).   SATA 3.0Gbit/s specs
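    On the SATA numbers: the interface rate and the sustained rate measure different things. A rough back-of-the-envelope check of the interface ceiling (SATA links use 8b/10b encoding, i.e. 10 line bits per data byte):

    ```shell
    # SATA II line rate is 3.0 Gbit/s; 8b/10b encoding means 10 bits
    # per byte on the wire, so the interface ceiling is about:
    echo "$((3000000000 / 10 / 1000000)) MB/s"   # prints: 300 MB/s
    ```

    Even SATA I's 1.5 Gbit/s works out to roughly 150 MB/s, so a 30 MB/s sustained write figure is limited by the drive mechanics (platter throughput), not by which SATA mode the drive negotiated.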

  • HP LaserJet 2100tn, Error message: "out of memory"

    When I'm printing, an error message appears after printing the first page: "out of memory". My printer has 40MB of memory. I tried to change the printer settings several times (8MB RAM, ..., over 30MB), but no result. The pages I want to print are from FileMakerPro.
    I also tried making a pdf file and printing that: same result, first page o.k., then the error message.
    Thanks a lot for your answers.
    Tobi

    Meg: You are an earth angel. You have come to my rescue before.
    Yikes... I found one sneaky b&w drawing .tiff that was 12621 x 966.
    Will convert that puppy.
    Thanks

  • Exchange rate difference recovery from vendor

    Hi Experts,
    We have an issue with our client over the matter below.
    A Canadian plant created a PO with a Hungarian vendor with HUF as the currency in the PO. During PO creation there was an exchange rate, say X1 (exchange rate not fixed). Due to some reasons, the GR was done late, four months after the PO creation date and after the invoice was done. The invoice was done three months after the PO creation date, and at GR/invoice creation the exchange rate was X2.
    The system posted with the X2 exchange rate, not considering the X1 rate. These delays in making the GR/MIRO are causing the client to pay excess amounts to the vendor, and he wants to get them back now.
    Please advise by which process he can recover this from the vendor, and how the GR can get posted with the old X1 rate for all those posted POs.
    Regards,
    Message was edited by Jürgen L: removed the unwanted terms

    I think you misunderstood the facts.
    If the exchange rate is not fixed, then it is not taken into account when the invoice or GR is posted.
    If the GR was posted after the invoice, then the goods receipt is valuated at the invoice value.
    You did not overpay the vendor, because you pay the vendor in the currency of the invoice, and this amount is exactly equal to the amount you ordered; the difference is just somewhere in your own books.
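    A worked example with made-up figures (X1 = 250 HUF/CAD at PO creation, X2 = 200 HUF/CAD at GR/invoice; both rates and the PO amount are illustrative assumptions) shows why the vendor is not overpaid:

    ```shell
    po_huf=1000000   # PO amount: 1,000,000 HUF (hypothetical)
    x1=250           # HUF per CAD at PO creation
    x2=200           # HUF per CAD at GR/invoice posting
    echo "CAD valuation at X1: $((po_huf / x1))"   # prints: CAD valuation at X1: 4000
    echo "CAD valuation at X2: $((po_huf / x2))"   # prints: CAD valuation at X2: 5000
    # The vendor receives exactly 1,000,000 HUF either way; only the
    # CAD valuation in the buyer's own books moves with the rate.
    ```

    The 1,000 CAD gap lands in the exchange-rate difference account, not in the vendor's pocket.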

  • MOD_JK configurations

    Hi
    We are facing problems with MOD_JK not redirecting to UI servers in the case of failover. We have 3 UI servers configured in MOD_JK, each pointed to an individual MDEX. When one of the UI servers failed, MOD_JK was not able to redirect the request to the other servers. We see the log message "server not responding".
    Is there any configuration that has to be specifically enabled for server failover? Also, can someone share ideas on MOD_PROXY? How well does it handle failovers?
    Below is the MOD_JK configuration
    worker.server.type-ajp13
    worker.server.lbfactor-1
    worker.server.socket_keepalive-TRUE
    worker.server.connection_pool_timeout-600
    worker.vma-end-lb.type-lb
    worker.vma-end-lb.balance_workers-server1,server2
    worker.vma-end-lb.sticky_session-1
    Tomcat Version-6
    Any suggestions are welcome on how to configure the MOD_JK plugin.
    Thanks
    Pradeep

    Your workers.properties config paste is unusual - it looks like you have a hyphen (-) instead of an equals sign (=) before each property value?
    You might want to add a "redirect" directive to each worker [1]. Since you have enabled sticky-sessions, you may also want to check you have setup jvmRoute correctly [2].
    Best
    Brett
    [1] http://tomcat.apache.org/connectors-doc/generic_howto/loadbalancers.html
    [2] http://tomcat.apache.org/tomcat-6.0-doc/config/engine.html
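    Assuming the hyphens really were a paste artifact, a corrected workers.properties might look like the sketch below. Hosts and ports are placeholders, and note that the original paste defined `worker.server.*` while balancing `server1,server2` -- each balanced worker needs its own definition, and each name must match the `jvmRoute` on the corresponding Tomcat `<Engine>`:

    ```
    worker.list=vma-end-lb

    worker.server1.type=ajp13
    worker.server1.host=ui1.example.com
    worker.server1.port=8009
    worker.server1.lbfactor=1
    worker.server1.socket_keepalive=true
    worker.server1.connection_pool_timeout=600
    worker.server1.redirect=server2

    worker.server2.type=ajp13
    worker.server2.host=ui2.example.com
    worker.server2.port=8009
    worker.server2.lbfactor=1
    worker.server2.socket_keepalive=true
    worker.server2.connection_pool_timeout=600
    worker.server2.redirect=server1

    worker.vma-end-lb.type=lb
    worker.vma-end-lb.balance_workers=server1,server2
    worker.vma-end-lb.sticky_session=1
    ```

    With `redirect` set, sessions stuck to a failed worker are sent to the named fallback instead of erroring out.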

  • Bash vulnerability in Solaris 10

    http://seclists.org/oss-sec/2014/q3/650
    https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/
    Any plans for a hotfix for bash on Solaris 10?
    $env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    vulnerable
    this is a test
    SunOS hostname 5.10 Generic_150401-13 i86pc i386 i86pc
    $bash -version
    GNU bash, version 3.2.51(1)-release (i386-pc-solaris2.10)
    Copyright (C) 2007 Free Software Foundation, Inc.
    $pkginfo -l SUNWbash
       PKGINST:  SUNWbash
          NAME:  GNU Bourne-Again shell (bash)
      CATEGORY:  system
          ARCH:  i386
       VERSION:  11.10.0,REV=2005.01.08.01.09
       BASEDIR:  /
        VENDOR:  Oracle Corporation
          DESC:  GNU Bourne-Again shell (bash) version 3.2
        PSTAMP:  sfw10-patch-x20120813130538
      INSTDATE:  Aug 19 2014 07:23
       HOTLINE:  Please contact your local service provider
        STATUS:  completely installed
         FILES:        4 installed pathnames
                       2 shared pathnames
                       2 directories
                       1 executables
                    1250 blocks used (approx)

    Hard to say whether it's safer to wait or safer to patch it yourself in the meantime but, if like me you'd rather not wait an indefinite period of time for a patch, here is a patching process that's working for me:
    Found the newest GNU patch compiled for Solaris on Open CSW: bash - Solaris package
    To install, you'll want the CSW package utility. Here are some instructions, but I'll also go over it below: Getting started — OpenCSW 0.2014.04 documentation
    You may already have the CSW package utilities installed, check under "/opt/csw/bin" for "pkgutil". If it's not there, issue
    pkgadd -d http://get.opencsw.org/now
    Then, I like to add a symbolic link into /usr/bin to make it easier:
    sudo ln -s /opt/csw/bin/pkgutil /usr/bin/pkgutil
    Now we can do the install -- pkgutil is going to handle all the heavy lifting, dependency building etc., and place the new bash binary into "/opt/csw/bin"
    sudo pkgutil -U
    sudo pkgutil -a bash
    sudo pkgutil -i bash
    Follow the prompts, and then look under /opt/csw/bin for bash:
    ls /opt/csw/bin | grep bash
    If you see it listed there w/ a Sep 25th date (or later, if you're following these instructions in my future), then you're ready for the final step -- replacing the old bash binary with the new.
    We're going to replace /usr/bin/bash with a link to /opt/csw/bin/bash. I was worried this step would crash running processes and applications (weblogic, BI, db instances), but so far no issues -- that said, PLEASE be careful and shutdown anything you can first! I can't be sure this step will work w/o any hiccups every time.
    cd /usr/bin
    sudo cp bash bash-old
    sudo ln -f /opt/csw/bin/bash /usr/bin/bash
    You can see we backed up the old bash binary (3.2), in case something goes wrong. When finished, re-run the test command from earlier and you should now see warnings instead of "vulnerable":
    env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    bash: warning: x: ignoring function definition attempt
    bash: error importing function definition for `x'
    this is a test
    Again, BE CAREFUL -- while I was figuring this out, I did take down a couple zones to the point where I couldn't SSH back into them.
    That said, the steps above are working flawlessly for me -- BUT I can't guarantee you'll have the same experience!
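    If it helps, the before/after test can be wrapped in a small POSIX-sh function so you can point it at any bash binary you want to verify (the `/usr/bin/bash` path below is just the one from the steps above; adjust to taste):

    ```shell
    #!/bin/sh
    # Report whether the given bash binary still executes functions
    # smuggled in via environment variables (CVE-2014-6271).
    check_shellshock() {
        out=$(env x='() { :;}; echo vulnerable' "$1" -c "echo test" 2>/dev/null)
        case "$out" in
            *vulnerable*) echo "VULNERABLE" ;;
            *)            echo "patched"    ;;
        esac
    }

    check_shellshock /usr/bin/bash
    ```

    Run it before and after the relinking step; on a patched binary the "vulnerable" echo never fires, so the function prints "patched".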
