BODS Usage

Hi BODS Guru,
I have an urgent requirement: the client is asking what the best practice is for using SAP BODS versus LSMW. Which one is the better choice?
Can you please send me any PPT or documents, because I need to convince the customers.
Thanks in advance, Guru.
Regards,
Vijay.

Hi Can Bel/Manoj,
Thanks for your valuable solutions.
Can you please send me the related documents covering the features of both BODS and LSMW?
Thanks in advance, Guru.
Regards,
Vijay.

Similar Messages

  • Usage of BODS Functions (NVL,DECODE...) are stopping the FULL PUSH-DOWN

    Hi,
    It seems that the usage of BODS functions like NVL and DECODE is leading to only a partial PUSH-DOWN to the database.
    Is there any way to achieve a FULL PUSH-DOWN (INSERT INTO DBTABLENAME)?
    Regards,
    Sudhakar

    Hello
    The optimiser determines whether the whole dataflow can be pushed down. NVL and DECODE can be pushed down to most databases, so it's something to do with how you've built the dataflow.
    Things that stop full push-down:
    1 - custom functions
    2 - complex statements
    3 - datatype conversions
    4 - complex dataflows with many transforms
    5 - some specific transforms (DQ, Pivot, etc.)
    With a bit of trial and error, you should be able to figure out which is stopping yours.
    Michael

  • Massive memory hemorrhage; heap size to go from about 64mb, to 1.3gb usage

    **[SOLVED]**
    Note: I posted this on stackoverflow as well, but a solution was not found.
    Here's the problem:
    [1] http://i.stack.imgur.com/sqqtS.png
    As you can see, the memory usage balloons out of control! I've had to add arguments to the JVM to increase the heap size just to avoid out-of-memory errors while I figure out what's going on. Not good!
    ##Basic Application Summary (for context)
    This application is (eventually) going to be used for basic on-screen CV and template-matching type things for automation purposes. I want to achieve as high a frame rate as possible for watching the screen, and handle all of the processing via a series of separate consumer threads.
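    (A minimal, hypothetical sketch of that capture/consumer hand-off, not from the original post: a bounded queue between one capture loop and a worker thread, using the FastRobot class shown further down. The queue size and all of the scaffolding are made up for illustration.)
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class CaptureLoopSketch {
             public static void main(String[] args) throws Exception {
                  final FastRobot fbot = new FastRobot();
                  // Small bounded buffer: the capture loop blocks when consumers fall behind,
                  // which also caps how many pixel arrays can be alive at once.
                  final BlockingQueue<int[]> frames = new ArrayBlockingQueue<int[]>(4);
                  Thread consumer = new Thread(new Runnable() {
                       public void run() {
                            try {
                                 while (true) {
                                      int[] frame = frames.take(); // waits for the next frame
                                      // ... CV / template-matching work on 'frame' goes here ...
                                 }
                            } catch (InterruptedException e) {
                                 Thread.currentThread().interrupt();
                            }
                       }
                  });
                  consumer.setDaemon(true);
                  consumer.start();
                  while (true) {
                       frames.put(fbot.createArrayScreenCapture()); // capture as fast as possible
                  }
             }
        }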
    I quickly found out that the stock Robot class is really terrible speed-wise, so I opened up the source, took out all of the duplicated effort and wasted overhead, and rebuilt it as my own class called FastRobot.
    ##The Class' Code:
        import java.awt.AWTException;
        import java.awt.GraphicsDevice;
        import java.awt.GraphicsEnvironment;
        import java.awt.HeadlessException;
        import java.awt.Point;
        import java.awt.Rectangle;
        import java.awt.Robot;
        import java.awt.Toolkit;
        import java.awt.image.BufferedImage;
        import java.awt.image.DataBufferInt;
        import java.awt.image.DirectColorModel;
        import java.awt.image.Raster;
        import java.awt.image.WritableRaster;
        import java.awt.peer.RobotPeer;
        import sun.awt.ComponentFactory;

        public class FastRobot {
             private Rectangle screenRect;
             private GraphicsDevice screen;
             private final Toolkit toolkit;
             private final Robot elRoboto;
             private final RobotPeer peer;
             private final Point gdloc;
             private final DirectColorModel screenCapCM;
             private final int[] bandmasks;
             public FastRobot() throws HeadlessException, AWTException {
                  this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                  this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
                  toolkit = Toolkit.getDefaultToolkit();
                  elRoboto = new Robot();
                  peer = ((ComponentFactory) toolkit).createRobot(elRoboto, screen);
                  gdloc = screen.getDefaultConfiguration().getBounds().getLocation();
                  this.screenRect.translate(gdloc.x, gdloc.y);
                  screenCapCM = new DirectColorModel(24,
                            /* red mask */    0x00FF0000,
                            /* green mask */  0x0000FF00,
                            /* blue mask */   0x000000FF);
                  bandmasks = new int[3];
                  bandmasks[0] = screenCapCM.getRedMask();
                  bandmasks[1] = screenCapCM.getGreenMask();
                  bandmasks[2] = screenCapCM.getBlueMask();
                  Toolkit.getDefaultToolkit().sync();
             }
             public void autoResetGraphicsEnv() {
                  this.screenRect = new Rectangle(Toolkit.getDefaultToolkit().getScreenSize());
                  this.screen = GraphicsEnvironment.getLocalGraphicsEnvironment().getDefaultScreenDevice();
             }
             public void manuallySetGraphicsEnv(Rectangle screenRect, GraphicsDevice screen) {
                  this.screenRect = screenRect;
                  this.screen = screen;
             }
             public BufferedImage createBufferedScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                  DataBufferInt buffer;
                  WritableRaster raster;
                  pixels = peer.getRGBPixels(screenRect);
                  buffer = new DataBufferInt(pixels, pixels.length);
                  raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                  return new BufferedImage(screenCapCM, raster, false, null);
             }
             public int[] createArrayScreenCapture() throws HeadlessException, AWTException {
                  return peer.getRGBPixels(screenRect);
             }
             public WritableRaster createRasterScreenCapture(int pixels[]) throws HeadlessException, AWTException {
                  DataBufferInt buffer;
                  WritableRaster raster;
                  pixels = peer.getRGBPixels(screenRect);
                  buffer = new DataBufferInt(pixels, pixels.length);
                  raster = Raster.createPackedRaster(buffer, screenRect.width, screenRect.height, screenRect.width, bandmasks, null);
                  return raster;
             }
        }
    In essence, all I've changed from the original is moving many of the allocations out of the function bodies and setting them as attributes of the class so they're not re-created on every call. Doing this actually had a significant effect on frame rate. Even on my severely underpowered laptop, it went from ~4 fps with the stock Robot class to ~30 fps with my FastRobot class.
    ##First Test:
    When I started getting out-of-memory errors in my main program, I set up this very simple test to keep an eye on FastRobot. Note: this is the code which produced the heap profile above.
        import java.awt.AWTException;

        public class TestFBot {
             public static void main(String[] args) {
                  try {
                       FastRobot fbot = new FastRobot();
                       double startTime = System.currentTimeMillis();
                       for (int i = 0; i < 1000; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                  } catch (AWTException e) {
                       e.printStackTrace();
                  }
             }
        }
    ##Examined:
    It doesn't do this every time, which is really strange (and frustrating!). In fact, it rarely does it at all with the above code. However, the memory issue becomes easily reproducible if I have multiple for loops back to back.
    #Test 2
        import java.awt.AWTException;

        public class TestFBot {
             public static void main(String[] args) {
                  try {
                       FastRobot fbot = new FastRobot();
                       double startTime = System.currentTimeMillis();
                       for (int i = 0; i < 1000; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                       startTime = System.currentTimeMillis();
                       for (int i = 0; i < 500; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                       startTime = System.currentTimeMillis();
                       for (int i = 0; i < 200; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                       startTime = System.currentTimeMillis();
                       for (int i = 0; i < 1500; i++)
                            fbot.createArrayScreenCapture();
                       System.out.println("Time taken: " + (System.currentTimeMillis() - startTime)/1000.);
                  } catch (AWTException e) {
                       e.printStackTrace();
                  }
             }
        }
    ##Examined
    The out-of-control heap is now reproducible, I'd say, about 80% of the time. I've looked all through the profiler, and the thing of most note (I think) is that the garbage collector seemingly stops right as the fourth and final loop begins.
    The output from the above code gave the following times:
    Time taken: 24.282 //Loop1
    Time taken: 11.294 //Loop2
    Time taken: 7.1 //Loop3
    Time taken: 70.739 //Loop4
    Now, if you sum the first three loops, it adds up to 42.676, which suspiciously corresponds to the exact time that the garbage collector stops, and the memory spikes.
    [2] http://i.stack.imgur.com/fSTOs.png
    Now, this is my first rodeo with profiling, not to mention the first time I've ever even thought about garbage collection -- it was always something that just kind of worked magically in the background -- so, I'm unsure what, if anything, I've found out.
    ##Additional Profile Information
    [3] http://i.stack.imgur.com/ENocy.png
    Augusto suggested looking at the memory profile. There are 1500+ `int[]` that are listed as "unreachable, but not yet collected." These are surely the `int[]` arrays that `peer.getRGBPixels()` creates, but for some reason they're not being destroyed. This additional info, unfortunately, only adds to my confusion, as I'm not sure why the GC wouldn't be collecting them.
    ##Profile using small heap argument -Xmx256m:
    At irreputable's and Hot Licks' suggestion, I set the max heap size to something significantly smaller. While this does prevent it from making the 1GB jump in memory usage, it still doesn't explain why the program is ballooning to its max heap size upon entering the 4th iteration.
    [4] http://i.stack.imgur.com/bR3NP.png
    As you can see, the exact issue still exists, it's just been made smaller. ;) The issue with this solution is that the program, for some reason, is still eating through all of the memory it can -- there is also a marked change in fps performance between the first three iterations, which consume very little memory, and the final iteration, which consumes as much memory as it can.
    The question remains: why is it ballooning at all?
    ##Results after hitting "Force Garbage Collection" button:
    At jtahlborn's suggestion, I hit the Force Garbage Collection button. It worked beautifully. It goes from 1GB of memory usage down to the baseline of 60MB or so.
    [5] http://i.stack.imgur.com/x4282.png
    So, this seems to be the cure. The question now is, how do I programmatically force the GC to do this?
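    (For illustration only, not from the original post: the standard way to request a collection from code is System.gc(), which is only a hint the JVM may ignore, so it may or may not reproduce what the profiler's "Force Garbage Collection" button does. A minimal sketch against the test class above:)
        public class ForceGcSketch {
             public static void main(String[] args) throws Exception {
                  FastRobot fbot = new FastRobot(); // class from the post above
                  for (int i = 0; i < 1000; i++) {
                       fbot.createArrayScreenCapture();
                  }
                  // Request a collection between capture bursts; the JVM is free to ignore this.
                  System.gc();
             }
        }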
    ##Results after adding local Peer to function's scope:
    At David Waters' suggestion, I modified the `createArrayScreenCapture()` function so that it holds a local `Peer` object.
    Unfortunately no change in the memory usage pattern.
    [6] http://i.stack.imgur.com/Ky5vb.png
    Still gets huge on the 3rd or 4th iteration.
    #Memory Pool Analysis:
    ###ScreenShots from the different memory pools
    ##All pools:
    [7] http://i.stack.imgur.com/nXXeo.png
    ##Eden Pool:
    [8] http://i.stack.imgur.com/R4ZHG.png
    ##Old Gen:
    [9] http://i.stack.imgur.com/gmfe2.png
    Just about all of the memory usage seems to fall in this pool.
    Note: PS Survivor Space had (apparently) 0 usage
    ##I'm left with several questions:
    (a) does the Garbage Profiler graph mean what I think it means? Or am I confusing correlation with causation? As I said, I'm in an unknown area with these issues.
    (b) If it is the garbage collector... what do I do about it..? Why is it stopping altogether, and then running at a reduced rate for the remainder of the program?
    (c) How do I fix this?
    Does anyone have any idea what's going on here?

    SO came through.
    Turns out this issue was directly related to the garbage collector. The default one, for whatever reason, would get behind on its collections at points, so the memory would balloon out of control; once allocated, that became the new normal for the GC to operate at.
    Manually setting the GC to ConcurrentMarkSweep solved this issue completely. After numerous tests, I have been unable to reproduce the memory issue. The garbage collector does an excellent job of keeping on top of these minor collections.
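    (For illustration only, not from the original thread: the collector is switched with a JVM launch flag; a sketch of running the test above with it, where the heap size is just an example value.)
        java -XX:+UseConcMarkSweepGC -Xmx256m TestFBot
    On much newer JVMs the CMS collector has since been deprecated and removed, so the equivalent experiment there would use a different collector such as G1.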

  • How to poll Aperture lens data for usage graph - Filters?

    Does anyone know how to set up a Filter in Aperture 3.1.1 so that one can see a list (or make a list) of the most to least common focal length one shoots at with interchangeable lenses?
    I assume it will probably have to be a search for all 20mm, then all 28mm, then all 35mm, then all 50mm, etc. and then look at the totals for each, penciled down, and see what focal lengths are the most and least used.
    I have set up separate camera smart folders, so I would need to conduct this search/filter only on the two DSLR bodies I have. Otherwise, a Nikon scanner, a Minolta scanner, a G10, and some other point-and-shoots will come into the mix and make it inaccurate.
    My objective is to see which lenses are redundant and which are less used, then sell the extras.

    Joseph Coates wrote:
    My objective is to see which lenses are redundant and which are less used, then sell the extras.
    Seems you should filter by lens and not by focal length.
    I keep a permanent set of Smart Albums, one for each lens I have. (Create one, then dupe it and change the Lens field and the Album name. Get the lens name from the metadata of a shot taken with the lens.) Periodically I check the totals to see my usage pattern.
    I recommend also setting up a series of Smart Albums for focal length. (As above, create one, then dupe and change the filter field and Album name. Group them in one Folder.) If you use cameras with different size sensors, you might want to set up both an actual focal length set, and a "35mm equivalent focal length" set. Use whatever ranges you find meaningful.
    Note that you can select any number of Folders to open all of them in the Browser and get a total number of images. Note that this number will change as you expand or collapse Stacks, and so the totals given depend somewhat on how you use Stacks. Note, too, that Smart Albums use very little overhead (the only thing stored are the criteria). Don't hesitate to make as many as you find useful.

  • How to configure BODS in network environment with NAT ?

    Hi Team,
    Now we are working on a POC of BO Data Services 4.0 with an SI partner, and they reported to us that a communication error (error code: BODI-1241023) occurred when they started a job from the Designer.
    They can do it without any problems in the following two cases:
    1. from a Designer which is installed on the CMS/JobServer machine
    2. from a Designer which is installed on a local PC within the internal network (without firewall/NAT)
    That is, the cause is the firewall with NAT (Network Address Translation) between the Designer and the JobServer/CMS.
    Also, they can log on to the CMS/JobServer in the NAT environment; however, they cannot start a job from the Designer.
    Port #3500 for the JobServer is open. They confirmed in the event log of the JobServer that they could log on to the JobServer.
    That is, Designer -> CMS/JobServer communication is OK, but JobServer -> Designer communication must be failing.
    Could you advise us how to configure BODS, on both the client and server sides, in a network environment with NAT?
    Thanks and best regards,

    Hi Buddy,
    You can achieve this with $FLEX$: create the first value set and assign it to the first field, then create the second value set based on the first value set using $FLEX$.
    Follow the steps mentioned in the link below:
    http://erpschools.com/articles/usage-of-flex

  • MEASUREMENTS IN ACTIVITY USAGE PROFILE

    Please see the enclosed 'printscreen' link. How is it possible to see, in the Activity Usage Profile, the measurements (hours or manpower, etc.) in the chart blocks, specific per month?
    Also, P6 does not show the actual hours + curve (actual progress till January 2011).
    Thanks in advance for your response, regards,
    WP

    Modify your timescale to Month.
    1. Right-click on the graph and select the Activity Usage Profile Options.
    2. Click on the Graph Tab
    3. Select the checkbox 'Calculate Average'
    4. Select the 'hours' per month by typing them in (this won't work in P6 7.0 with no service pack). 173 hours is the default and this will convert the Axis to 'bodies'.
    If you want to see exact numbers, double-click in the graph and a popup window will appear.
    Regards,
    Steve Clements

  • Example scenarios for usage of scripting

    Hi Sapians,
    Can any of you give me some typical examples of the usage of scripting in SAP BODS, other than SQL statements? For instance, I need examples for error handling and for other scenarios where I can use scripting.
    Please suggest some scenarios with examples for the same.
    Thanks in advance.
    Regards,
    Kishor Kumar s

    Check this -
    How to capture error log in a table in BODS

  • Support for floating bodies

    Just tried to write a paper for a conference. It looks quite nice in the end, but it was a bit of a hassle to get there. Buzzword is currently nice for documents of a designed nature, but not for structured documents such as scientific papers. However, the very nature of Buzzword should fit in with joint authoring and reviewing of papers and articles amongst scientists.
    My problem was to include illustrations that span the entire row, together with a caption. I solved this by inserting the image into a two-row, single-column table, with the image in the first row and the caption in the second. The problem is that if the image doesn't fit on the rest of the page, it is moved to the top of the next, leaving the rest of the previous page blank. A classical problem, really. So I had to manually move these tables around in the text to avoid half-empty pages.
    I think authors shouldn't have to bother themselves too much with this.
    This could be solved by introducing floating bodies: bodies that are anchored to a specific point in the text, but are moved around automatically to optimal positions. A floating body consists of a content part, which can hold an image, a table, a piece of source code, an equation, or some other text, together with an optional caption.

    Hi arenol,
    Thanks for your post, and for your feedback. I myself have been using Buzzword for scientific collaboration, and have noticed as well, in my usage, that the image floating could be improved on.
    We certainly appreciate users letting us know how they would like to see the services improved, and we certainly take your ideas to heart.
    I will pass your thoughts on to the rest of the team as something to be considered for future versions.
    Thanks again!
    Regards,
    Pete

  • Problem with Firefox and very heavy memory usage

    For several releases now, Firefox has been particularly heavy on memory usage. With its most recent version, with a single browser instance and only one tab, Firefox consumes more memory than any other application running on my Windows PC. The memory footprint grows significantly as I open additional tabs, getting to almost 1GB when there are 7 or 8 tabs open. This is true both with no extensions or plugins and with the usual set (Firebug, Firecookie, Fireshot, XMarks). Right now, with 2 tabs, the memory is at 217,128K and climbing; CPU is between 0.2 and 1.5%.
    I have read dozens of threads providing "helpful" suggestions, and tried any that seem reasonable. But, like most others who experience Firefox's memory problems, I've found that none of them address the issue.
    Firefox is an excellent tool for web developers, and I rely on it heavily, but have now resorted to using Chrome as the default and only open Firefox when I must, in order to test or debug a page.
    Is there no hope of resolving this problem? So far, from responses to other similar threads, the response has been to deny any responsibility and blame extensions and/or pluggins. This is not helpful and not accurate. Will Firefox accept ownership for this problem and try to address it properly, or must we continue to suffer for your failings?


  • Problem with scanning and memory usage

    I'm running PS CS3 on Vista Home Premium, 1.86Ghz Intel core 2 processor, and 4GB RAM.
    I realise Vista only sees 3.3GB of this RAM, and I know Vista uses about 1GB all the time.
    Question:
    While running PS, and only PS, with no files open, I have 2GB of RAM free. Why will PS not let me scan a file that it says will take up 300MB?
    200MB is about the limit that it will let me scan, but even then, the actual end product ends up being less than 100MB (around 70MB in most cases). I'm using a Dell AIO A920, latest drivers etc., and PS is set to use all available RAM.
    Not only will it not let me scan: once a file I've opened has used up "x" amount of RAM, even if I then close that file, "x" amount of RAM will STILL be unavailable. This means if I scan something, I have to save it, close PS, then open it again before I can scan anything else.
    Surely this isn't normal. Or am I being stupid and missing something obvious?
    I've also monitored the memory usage during scanning using Task Manager and various other things; it hardly goes up at all, then shoots up to 70-80% once the roughly 70MB file is loaded. Something is up, because if that were true, I'd actually only have 1GB of RAM, and running Vista would be nearly impossible.
    It's not a Vista thing either as I had this problem when I had XP. In fact it was worse then, I could hardly scan anything, had to be very low resolution.
    Thanks in advance for any help

    55%, it's still 1.6Gb....there shouldn't be a problem scanning something that it says will take up 300Mb, then actually only takes up 70Mb.
    And not wrong, it obviously isn't releasing the memory when other applications need it because it doesn't, I have to close PS before it will release it. Yes, it probably is supposed to release it, but it isn't.
    Thank you for your answer (even if it did appear to me to be a bit rude/shouty, perhaps something more polite than "Wrong!" next time) but I'm sitting at my computer, and I can see what is using how much memory and when, you can't.

  • Problem with JTree and memory usage

    I have a problem with JTree when memory usage exceeds the physical memory (I have 512MB).
    I use JTree to display very large data about the organizational structure of a big company. It works fine until memory usage exceeds the physical memory; then some of the nodes are not visible.
    I hope somebody has an idea about this problem.


  • What causes cellular data usage to decrease if last reset has not changed? I reset every pay period cuz I do not have unlimited and was showing 125 mb sent/1.2g received. Now showing 98mb sent/980mb received. Last reset still shows same

    I reset my cellular data usage every month at the beginning of my billing period so that I can keep track of how much data I am using throughout the month. I do not have the unlimited data plan and have gone over a few times. It's usually in the last couple of days of my billing cycle, and I am charged $10 for an extra gig, of which I barely use 10% before my usage is restarted for the new billing cycle. FYI, they do not carry over the remainder of the unused gig that you purchase for $10, which I disagree with but have accepted. Moving on.
    I have two questions, one possibly being answered by the other. 1. Is cellular data used when I sync to iTunes on my home computer to load songs onto my iPhone from my iTunes library (songs that already exist)? 2. What causes the cellular data usage readings in iPhone Settings/General/Usage/Cellular Usage to decrease if my last reset has not changed? It is close to the end of my billing cycle and I have been keeping a close eye on it. Earlier today it read around 180MB sent/1.2GB received. Now it reads 90MB sent/980MB received. My last reset date reads the same as before. I don't know if my sync and music management had anything to do with the decrease, but I didn't notice the decrease until after I had connected to iTunes and loaded music. I should also mention that a 700MB app automatically loaded itself onto my phone during the sync process. Is cellular data used at all during any kind of sync/iPhone/iTunes management? Is the cellular data usage reading under iPhone Settings a reliable source to keep track of monthly data usage? I guess that turned out to be more than two questions, but they're all related. Thanks in advance for any help you can offer me. It is information I find valuable. Sorry for the book-long question.

    Is cellular data used at all during any kind of sync /iPhone/iTunes management? Is the cellular data usage reading under iPhone settings a reliable source to keep track of monthly data usage?
    1) No.
    2) It does provide an estimated usage, but it's not accurate. For accurate determination, you should check the remaining/used MBs from the carrier (most of the carriers provide this service for free).

  • I am having a data excess usage issue with my Jetpack although I did not use the VZW network at all.

    Background: My home does not have access to landline internet, and I am relying on a Verizon Jetpack device rather than satellite or cable for internet access. Within my Jetpack wireless network I have several devices connected: iPads, PCs, a printer, etc. (up to 5 devices can be connected with my system). Recently, I had one device within my immediate wireless network from which I needed to transfer data via FTP to another local device within my immediate network. The receiving device FTP'd data from the device (198.168.1.42) to the local IP address (192.168.1.2). To understand the data path, a trace route for this transfer is 198.168.1.42 (source) to 198.168.1.1 (Jetpack) to 192.168.1.2 (receiver). Note that the VZW network ((cloud)/4G network) was NOT used in this transfer, nor was any VZW bandwidth used for this transfer (of course, less the typical overhead status/maintenance communications that happen regardless). Use of the VZW bandwidth would be something like downloading a movie from Netflix (transferring data from a remote site, through the VZW network, into the Jetpack and to the target device). I also understand that if one's devices have auto SW updates, those would count against the usage as well. I get that. My understanding is that my usage billing is based on data transfer across the VZW network.
    Now to the problem: To my surprise, I was quite substantially charged for the "internal" direct transfer of this data, although I didn't use the VZW network at all. This does not seem to be right, and doesn't make much sense to me. Usage should be based on the VZW network, not a local Wi-Fi. In this case, Verizon actually gets free money from me without any use of or impact to the VZW network. Basically this is a VERY expensive rental for the Jetpack device, which I bought anyway. Considering this, I am also charged each time I print locally. Diving into this further, I am also interested in knowing what the overhead (non-data) communications between the Jetpack router and the devices amount to, and how much that adds up to over time.
    Once I realized I was getting socked on bandwidth, as a temporary solution I found an old Wi-Fi router and created a separate Wi-Fi network off the Jetpack, but the billing damage was already done. Switching each device back and forth to FTP and print is a hassle, and there should be no reason the existing hardware couldn't handle this, with charges aligned with VZW usage. Is this purposely intended by Verizon? Is this charging correct? And can I get some help with this?
    Logically, usage should be based on VZW network usage, not internal transfers. All transfers between 192.168.x.x IP addresses are by default internal. Any data that needs to leave the internal network is translated into the dynamic IP addresses established by the VZW network. That should be very easy to detect for usage purposes. At the very least, this fact needs to be clearly identified and clarified for users of the Jetpack. How would one use a local network and not get socked with usage charges? Can one set up a Wi-Fi network with another router, hardwired directly from the router to the Jetpack, so that only data to and from the VZW network is billed? I might be able to figure out how to have the Jetpack powered on but disable the VZW connection, but I don't want to experiment and find out that the internal transfers are being logged and the log sent after the fact anyway once I connect. A reasonable solution would be that users are able to use the router functions of the Jetpack (since one has to buy the device anyway) and only be billed for VZW usage.
    Your help in this would be greatly appreciated. Thanks

    I had one Mac and spilt water on it; the motherboard fried, so I had to buy a used one, being in school and all. It is an MC375LL/A.
    Model Name: MacBook Pro
    Model Identifier: MacBookPro7,1
    Processor Name: Intel Core 2 Duo
    Processor Speed: 2.66 GHz
    Number of Processors: 1
    Total Number of Cores: 2
    L2 Cache: 3 MB
    Memory: 8 GB
    Bus Speed: 1.07 GHz
    Boot ROM Version: MBP71.0039.B0E
    SMC Version (system): 1.62f7
    Hardware UUID: A802DE22-1E57-5509-93C5-27CEF01377B7
    Sudden Motion Sensor State: Enabled
    I do not have a backup of it, so I am thinking about swapping the hard drive from the water-damaged machine into this one. I'm not even sure if that would work, but it did not seem to be damaged, as I recovered all the files I wanted off of it to put onto this MBP.
    The previous owner didn't have it set to boot; they had left all their settings on it and tried to edit all the names on it, and it had a bunch of server info, printers, and other junk on it. I do not believe he edited the terminal system though; he doesn't seem too terribly bright (if that's possible).
    To be honest, I hate Lion compared to the old one I had; this one has so many more issues: overheating, fan noise, CD/DVD noise.
    If you need screenshots or data of anything else, ask away.
    The problem is I do not want to start from scratch if there is a chance of fixing it. This one did not come with disks or anything like my first, so I don't even know if I could. As it sits now, I am basically starting from scratch anyway, because all my apps are reset but working. I am hoping to get my data back somehow, though; I lost all of my bookmarks, and editing all my apps and settings again would be a pain.

  • High memory usage when many tabs are open (and closed)

    When I open Firefox, the memory usage is about 70-100 MB of RAM. When I'm working with Firefox I often open 15-20 tabs at once, then I close them and open others. Memory usage increases up to 450-500 MB of RAM. After closing the tabs the usage usually decreases, but only after some time. It starts decreasing very slowly and never comes back to the level from the beginning. After a few hours of work, Firefox uses about 400 MB of RAM even if only one tab is open. First I thought it was because of my plugins (Firebug, Speed Dial, Adblock Plus), but I've checked and it's not. I reinstalled Firefox but the problem still occurs. I'm not sure if this is normal. Could you help me, please?

    Hi,
    Not an exact answer, but please [http://kb.mozillazine.org/Reducing_memory_usage_(Firefox) see this.] The KB article walks through various general situations which may or may not be applicable to specific instances, but could nevertheless be helpful.
    Useful links:
    [https://support.mozilla.com/en-US/kb/Options%20window All about Tools > Options]
    [http://kb.mozillazine.org/About:config Going beyond Tools > Options - about:config]
    [http://kb.mozillazine.org/About:config_entries about:config Entries]
    [https://addons.mozilla.org/en-US/firefox/addon/whats-that-preference/ What's That Preference? add-on] - Quickly decode about:config entries - After installation, go inside about:config, right-click any preference, enable (tick) MozillaZine Results to Panel and again right-click a pref and choose MozillaZine Reference to begin with.
    [https://support.mozilla.com/en-US/kb/Keyboard%20shortcuts Keyboard Shortcuts]
    [http://kb.mozillazine.org/Profile_folder_-_Firefox Firefox Profile Folder & Files]
    [https://support.mozilla.com/en-US/kb/Safe%20Mode Safe Mode]
    [http://kb.mozillazine.org/Problematic_extensions Problematic Extensions/Add-ons]
    [https://support.mozilla.com/en-US/kb/Troubleshooting%20extensions%20and%20themes Troubleshooting Extensions and Themes]

  • Popularity trend/usage report is not working in sp2013. Data was not being processed to EVENT STORE folder which was present under the Analytics_GUID folder.

    Hi
    I am working on a SharePoint migration project. We have migrated one SharePoint project from MOSS 2007 to SP2013. The issue is that when we click on Popularity Trends > Usage Report, it throws an error.
    Issue: The data was not being processed to the EVENT STORE folder, which is present under the Analytics_GUID folder. Also, data was not present in the Analytical Store database.
    In the log viewer I found the errors below.
    HIGH -
    SearchServiceApplicationProxy::GetAnalyticsEventTypeDefinitions--Error occured: System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly
    secured fault was received from the other party.
    UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
    System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    HIGH - Getting Error Message for Exception System.Web.HttpUnhandledException
    (0x80004005): Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party.
    CRITICAL - A failure was reported when trying to invoke a service application:
    EndpointFailure Process Name: w3wp Process ID: 13960 AppDomain Name: /LM/W3SVC/767692721/ROOT-1-130480636828071139 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-
    UNEXPECTED - Could not retrieve analytics event definitions for
    https://XXX System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
    System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
    I have verified a few things on the server, which are mentioned below:
    The two timer jobs (Microsoft SharePoint Foundation Usage Data Processing, Microsoft SharePoint Foundation Usage Data Import) are running fine.
    The AppFabric Caching service has been started.
    The Analytics_GUID folder has been shared with WSS_ADMIN_WPG and WSS_WPG, and Read/Write access was granted.
    .usage files are getting created, and the temporary (.tmp) file has also been created.
    The usage logging database that the usage data is transported to has been checked; the data is available.
    Please provide pointers on what needs to be done.

    Hi Nabhendu,
    According to your description, my understanding is that you cannot use the popularity trend report after migrating from SharePoint 2007 to SharePoint 2013.
    In SharePoint 2013, the analytics functionality is a part of the search component. There is an article for troubleshooting SharePoint 2013 Web Analytics, please take a look at:
    Troubleshooting SharePoint 2013 Web Analytics
    http://blog.fpweb.net/troubleshooting-sharepoint-2013-web-analytics/#.U8NyA_kabp4
    I hope this helps.
    Thanks,
    Wendy
    Wendy Li
    TechNet Community Support

Maybe you are looking for

  • How to get notes from icloud to new iphone.

    All my old iphone 4S 'notes' are stored in icloud. When I synced up my new iphone 5 it did not transfer the last ten months of notes onto the phone, just everything prior to that. So I'm nervous if I back up the notes I've made since, it may overwrit

  • New 80gb ipod classic. Can't get itunes to sync music at all. HELP!

    I have a brand new ipod classic. It is fully charged and is connected to the computer. When the screen to register my ipod came up i clicked register later. Then the ipod software license agreement came up, so i read it, checked the box and clicked s

  • Trouble about inherited property icon

    Hi, I think the answer to this question is wrong: http://www.shiaupload.ir/images/05580302874684517163.jpg (here is the image sample) But the answer shouldn't have been C? Because an inherited property that has been changed is displayed with a red cr

  • Solaris 11.1 pkg output

    Hi All, On Solaris 11.1, if the publisher is to pkg.oracle.com/solaris/release, I see the following output: root@sun1:~# pkg list -afv entire FMRI IFO pkg://solaris/[email protected],5.11-0.175.1.0.0.24.2:20120919T190135Z i-- pkg://solaris/[email pro

  • Stored procedure to Insert any type of file in to sql server from the path ?

    Hi, I have a table "FileMGT" and storing files in to disks.  FileMGT Columns : FileName FilePath  020324.doc f:\cfs\Claims Document  5013.tif f:\Addendums  2790.msg t:\Treatycfs Now I would like to store all files from above table in to sql sever by