Trying to understand processes

I support an application that is suddenly kicking off 2 processes that are eating most of the CPU on the server (last I was told, the server runs Solaris 8 and iPlanet Enterprise Server 6.0).
The processes are:
wpswrapupd
wpswraprnx
wpswraprnx
(Yes, there are 2 processes with the same name.)
I don't know what these processes are for (nothing really comes up on any search engine for them), and I don't know why they are suddenly running and using so much CPU (this application has been in existence for years with a solid architecture).
Any ideas?

Similar Messages

  • Trying to understand problems that occur when redistributing between two OSPF processes

    Hi all, I'm currently brushing up on my OSPF and trying to understand the problems that can occur when redistributing between two OSPF processes. I have read and understand (I think!) the issues caused by the fact that the same route submitted by two different OSPF processes may not necessarily follow the OSPF rules that one would expect - for example, OSPF preferring intra-area routes to inter-area routes to external routes, but only within the same process. So, if the same route is submitted from two different processes, that rule goes out the window.
    But I'm having some difficulty getting my head around the idea of setting the administrative distance lower in one OSPF process to prefer one domain over the other. I just can't quite follow the example described in this document:
    http://www.cisco.com/c/en/us/support/docs/ip/open-shortest-path-first-ospf/4170-ospfprocesses.html#twored
    Specifically, in figure 4 where two external networks - external network "N" originating in OSPF domain 1, and external network "M" originating in OSPF domain 2 - are redistributed via two ASBRs. The explanation states:
    This sequence of events could occur:
    Router A (Router B) redistributes M into Domain 1, and external M will reach Router B (Router A).
    Because the administrative distance of Domain 1 is lower than that of Domain 2, Router A (Router B) will install M through Domain 1 and will set to maxage its previously originated LSA (from event 1) in Domain 1.
    Because M has been set to maxage in Domain 1, Router A (Router B) will install M through Domain 2 and, therefore, will redistribute M into Domain 1 again.
    Same as event 1.
    I can't quite work my way through this. I guess it must have something to do with the redistribution of "M" from domain 2 into domain 1 being learned by both ASBRs due to the lower administrative distance assigned to external routes in domain 1, and the original routes through domain 2 being deleted, but then I can't follow the rest of the description. And I can't understand why this would be a problem for network "M" in OSPF domain 2, but NOT for network "N" in OSPF domain 1.
    Any explanation gratefully received!
    Thanks, Graham

    Hello.
    You are right - whenever A and B learn about "M" from Domain 2, they each craft an LSA for Domain 1 and inject it simultaneously. They then learn each other's LSAs simultaneously and withdraw (set the age to 3600) their previous LSAs. And it might flap indefinitely.
    If they don't learn the LSAs simultaneously (let's say that A is much faster than B), then there will be no flaps, but B would learn all Domain 2 routes (not just the redistributed ones) via Domain 1.
    And later you will observe a routing loop (when you stop advertising M from D): A knows "M" from Domain 2 and injects it into Domain 1, B knows it from A via Domain 1 and injects it into Domain 2... so "M" stays in the routing tables due to mutual redistribution.
    You don't have a similar (flap) issue with network "N", because the admin distance is lower for Domain 1, so both routers will never prefer the path via Domain 2! But even with no route flaps, you will still observe a routing loop if you stop advertising "N" from C.
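    A rough way to see the admin-distance part of this is a tiny "lowest administrative distance wins" comparison. The sketch below is plain Java, not router code, and the distance values 100 (Domain 1) and 110 (Domain 2) are assumptions for illustration; it only shows why the copy of "M" heard through Domain 1 displaces the router's own Domain 2 copy, which is the step that triggers the withdraw/flap cycle described above.
    // Minimal sketch, assuming Domain 1 is configured with a lower administrative
    // distance (100) than Domain 2 (110). Not router code; it only models the
    // "lowest admin distance wins" comparison driving the behaviour above.
    public class AdminDistanceSketch {

        static final int AD_DOMAIN_1 = 100;   // assumed value
        static final int AD_DOMAIN_2 = 110;   // assumed value

        // A candidate path for external route "M": where it was learned and its distance.
        record Candidate(String learnedFrom, int adminDistance) {}

        static Candidate select(Candidate a, Candidate b) {
            return a.adminDistance() <= b.adminDistance() ? a : b;
        }

        public static void main(String[] args) {
            // Router A's two candidates for M:
            Candidate viaDomain2 = new Candidate("Domain 2 (native, from D)", AD_DOMAIN_2);
            Candidate viaDomain1 = new Candidate("Domain 1 (B's redistributed copy)", AD_DOMAIN_1);

            System.out.println("Router A installs M via " + select(viaDomain2, viaDomain1));
            // The Domain 1 copy wins, so A stops redistributing M and max-ages its own
            // LSA in Domain 1; B does the symmetric thing, and the flap cycle can start.
            // For "N" the Domain 1 copy is the native one, so nothing gets displaced.
        }
    }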

  • Trying to understand, being prompted the file compression rules on saving, or not

    Hello,
    I'm trying to understand something, could I ask for your help, please ?
    After working on a jpg file, when I want to save it, still as jpg, with my Photoshop CS5,
    - sometimes photoshop will just save the picture, and it's done
    - sometimes photoshop will show me the compression dialog, "JPEG Options", in which I can choose the compression ratio, the format options (baseline, baseline optimized, progressive), and have an estimation of the total file size
    While not being prompted with any dialog is simpler, and I then simply assume Photoshop decides to retain the current image's compression and format settings, I must say I like to be in control, and I'd like to know in what form the file is being saved without having to resort to the much more complex "Save For Web" menu.
    Please, would you know WHAT "triggers" the appearance of the JPEG Options dialog when we close/save a jpeg file in Photoshop? What makes this dialog appear, and what makes it not appear?
    Are there trivial file operations/changes/filters that necessarily trigger its appearance when we want to save, something like that? I've tried a variety of these, but I still can't figure it out; sometimes it shows up in the end, and sometimes it doesn't.
    Thank you very much if you can help me
    Kind regards,
    Oliver

    @ c.pfaffenbichler
    These are images from various sources, not just one.
    I'm deliberately excluding Save For Web, this completely re-processes everything.
    My purpose, precisely, is to know when Photoshop decides to retain the image's "rules", and when it decides to ask us how we want it saved.
    Simply taking a jpeg image, doing stuff to it, hitting Ctrl-W to close the window, and seeing whether it will be an
    - «OK, sure, do you want to save ? You clicked OK to confirm you wanted the changes saved ? Good, now it's closed» or a
    - «please sir, how would you like your image saved, tell me the compression ratio and the format options, thank you»

  • Trying to understand what are the parameter/output From debug SNMP timers

    Hi All
    I am trying to understand the parameters/output from the debug SNMP timers.
    Output of the SNMP timers:
    *Dec 31 11:56:27: SNMP: HC Timer 632DDE28 fired
    *Dec 31 11:56:27: SNMP: HC Timer 632DDE28 rearmed, delay = 5000
    *Dec 31 11:56:32: SNMP: HC Timer 632DDE28 fired
    *Dec 31 11:56:32: SNMP: HC Timer 632DDE28 rearmed, delay = 5000
    *Dec 31 11:56:37: SNMP: HC Timer 632DDE28 fired
    *Dec 31 11:56:37: SNMP: HC Timer 632DDE28 rearmed, delay = 5000
    *Dec 31 11:56:38: SNMP: HC Timer 70B54A70 fired
    *Dec 31 11:56:38: SNMP: HC Timer 70B54A70 rearmed, delay = 20000
    70B54A70, 632DDE28 - what do these numbers mean?
    5000, 20000 - why do I have different delay times? (Does it mean that I have a delay for SNMP requests?)

    The debug messages you are seeing are related to High-Capacity (HC) timers, which manage updates to the 64-bit (HC) SNMP counters defined in RFC 2233.
    The "fired" and "rearmed" messages indicate when each of these timers updates ("fired") the HC SNMP counters, and when it will fire next (the "rearmed" messages). Higher-speed interfaces require updates more often than lower-speed interfaces, so you see two examples of that in your debug messages: 5000 ms updates vs. 20000 ms updates.
    The numbers in the messages (i.e. 632DDE28) are internal references to the timer that has fired.
    These messages do not indicate any delay in SNMP message processing; they are normal SNMP operation of the HC counters.
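    For a concrete sense of why a faster interface gets the shorter rearm delay, it helps to work out how quickly a legacy 32-bit octet counter wraps; the 64-bit HC counters (and the timers that maintain them) exist because of exactly this. A small worked example follows, with example interface speeds that are not taken from your device.
    // Rough arithmetic sketch: time for a 32-bit octet counter to wrap at various
    // line rates. The agent has to sample the hardware counter well before each wrap
    // to keep the 64-bit HC counter correct, hence shorter timer delays on faster links.
    public class CounterWrapTime {
        public static void main(String[] args) {
            long[] bitsPerSecond = { 100_000_000L, 1_000_000_000L, 10_000_000_000L };
            for (long bps : bitsPerSecond) {
                double bytesPerSecond = bps / 8.0;
                double secondsToWrap = 4294967296.0 /* 2^32 octets */ / bytesPerSecond;
                System.out.printf("%,d bit/s: 32-bit octet counter wraps in about %.1f s%n",
                        bps, secondsToWrap);
            }
            // Roughly 343 s at 100 Mb/s, 34 s at 1 Gb/s and 3.4 s at 10 Gb/s, which is
            // consistent with seeing different rearm delays such as 5000 ms and 20000 ms.
        }
    }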

  • Trying to understand traffic Flow in a LWAPP wireless configuration.

    I'm trying to understand, at a high level, how wireless traffic flows in the new LWAPP configuration. From what I can tell, all wireless traffic must flow through the controllers before getting onto the LAN.
    So let's say I have an LWAPP access point off an access switch in a remote closet and my controller is off my core switches. I want to communicate from my wireless PC to a wired PC on this same access switch. The traffic flows from the AP down to the core switch, through the controller, and back up to the access switch to the wired PC.
    Is that correct?
    If this is true my main concern is supporting APs from a central controller across a low speed WAN. Looks like I would not want to do that...

    You're right in your assumption. Data traffic travels from the client to the AP. The AP then encapsulates this data using LWAPP and forwards it to the Controller. The WLC then de-encapsulates (?) it, processes the traffic as necessary and then drops it onto the wired LAN.
    So, in your scenario, the wireless client would send data to the AP. This would be encapsulated between the AP and the controller and then sent back again unencapsulated to the wired client.
    Regarding using this system over a low speed WAN, there are two ways of doing this.
    The first is to use a local WLC at the remote site (e.g. a WLC2006 or the new WLC network module for 2800/3800 ISR routers).
    The second is to use AP1030s which are 'Remote Edge Access Points'. These aren't quite as lightweight as the rest of the 1000 Series in that they will bridge local traffic and only encapsulate traffic heading 'off site'. They will also continue to operate if connection back to the WLC is lost (the first WLAN configured on the WLC remains up on the REAP whilst connection to the WLC is lost).
    I believe that the recommendation for these is a minimum of 2Mbps WAN connection.

  • Trying to understand message redelivery

    Hi,
    I'm trying to understand redelivery.
    I'm using JMS to sync user data from our J2EE system with our Lotus Domino system. I want to be sure that in the event of any failure in the syncing process, a notification is sent to ensure that at least the Notes/Domino side can be synced manually by an administrator.
    So my question is how to be sure to capture and send notification of EVERY failure with the information needed to manually sync if needed?
    As I understand it, it's my responsibility to ensure that I handle all Exceptions in the onMessage() method of our MessageListener implementation - without throwing any new Exceptions. I do this so I'm confident that things like a Notes Session not being instantiated, or a Notes document not being found, are handled and a notification sent. The result is that if an Exception occurs at this level, I handle it and, as far as JMS is concerned, the message was consumed and acknowledged. Right?
    But what about messages that fail before the MessageListener is reached - presumably a problem with the JMS provider (we're using WebLogic 6.1)? Can I just rely on JMS to redeliver failed messages and not worry about them? Is there a recommended way to test such a failure so I can be sure what happens? I tried throwing a RuntimeException in the onMessage() method, but I did not observe any attempts to resend the message, and when I browsed the queue it was empty.
    So as you can see, I'm unclear about this! I'd appreciate any clarifying points!
    Terrence

    You can rely on the JMS provider to take care of its own failures. However, please note that the JMS provider can fail right after you have successfully done your work in onMessage - but before the message acknowledgement happens. This means that in case of failure, you may get the last consumed message again.
    Also note that different JMS providers handle RuntimeExceptions raised from onMessage differently, so check your vendor's documentation. After a couple of retries, your JMS provider might, for instance, deem that the receiver/subscriber has a problem consuming the message and move it to an error queue.
    - Bjarne.
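    To make the pattern above concrete, here is a minimal sketch of an onMessage() implementation, assuming the JMS 1.x API available under WebLogic 6.1; the helper methods (syncUserToDomino, notifyAdministrator) are hypothetical placeholders, not part of any real API. The key points: never let an exception escape onMessage(), capture enough detail for a manual sync, and treat redelivered messages as possible duplicates, since the provider may fail after your work but before the acknowledgement.
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    public class UserSyncListener implements MessageListener {

        public void onMessage(Message message) {
            try {
                if (message.getJMSRedelivered()) {
                    // Possible duplicate: make the sync idempotent or check before applying.
                }
                String payload = ((TextMessage) message).getText();
                syncUserToDomino(payload);                        // hypothetical sync step
            } catch (Exception e) {
                // Swallow the exception so the message is still acknowledged, but make
                // sure an administrator gets the payload needed for a manual sync.
                notifyAdministrator("User sync failed", message, e);   // hypothetical
            }
        }

        private void syncUserToDomino(String payload) { /* Notes/Domino work goes here */ }

        private void notifyAdministrator(String subject, Message failed, Exception cause) {
            // e.g. send a mail or write the details to an error queue
        }
    }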

  • Trying to Understand Color Management

    The title should have read, "Trying to Understand Color Management: ProPhoto RGB vs. Adobe RGB (1998), my monitor, a printer and everything in between." Actually I could not come up with a title short enough to describe my question, and even this one is not too good. Here goes: the more I read about color management, the more I understand, but also the more I get confused, so I thought the best way for me to understand is perhaps to ask the question my way, for my situation.
    I do not own an expensive monitor, I'd say middle of the road. It is not calibrated by hardware or any sophisticated method; I use a simple software tool and that's it. As for my printer, it isn't even a proper photo printer. My editing of photos is mainly for myself--people either view my photos on the net or on my monitor. At times I print photos on my printer and at times I print them at a print shop. My philosophy is this. I am aware that what I see on my monitor may not look the same on someone else's monitor, and though I would definitely like it if it were possible, it doesn't bother me that much. What I do care about is for my photos to come close enough to what I want them to be in print--in other words, when the time comes, to get the best colors possible from a print. Note here that I am not even that concerned with color accuracy (my monitor colors equaling print colors, since I know I would need a much better, calibrated monitor to compare accurately) but rather with color detail. What concerns me is that, come the day I do need to make a good print (or can afford a good monitor/printer), I have as much to work with as possible. This leads me to think that working in ProPhoto RGB is the best method, and then scaling down according to needs (scaling down for web viewing, for example). So I thought that was the solution, but elsewhere I read that using ProPhoto RGB with a non-pro monitor like mine may actually work against me; hence my getting confused, not understanding why this would be so, and my coming here. My goal, my objective, is this: should I one day want to print large images to present to a gallery, or create a book of my own, then I want my photos at that point in time to be the best they can be--the present doesn't worry me much. Do I make any sense?
    BTW if it matters any I have CS6.

    To all of you, thanks. First off, yes, I have now begun shooting in RAW. As to my future being secure because of me doing so, let me just say that once I work on a photo I don't like the idea of going back to the original, since hours may have been spent working on it, and once that is done the original raw is deleted--a tiff or psd remains. As to "You're using way too much club for your hole right now," I loved reading this sentence :-) You wanna elaborate? As to the rest, monitor/printer, here's the story: I move around a lot, and I mean a lot; in other words I may be here for 6 months, then move, and 6 months later move again. What this means is that a printer does not follow me; at times even my monitor will not follow me, so no printer calibration is ever taken into consideration, but yes, I have used software monitor calibration. Having said this, I must admit that time and again I have not seen any really noticeable difference (yes I have, but only ever so slight) after calibrating a monitor (as mentioned, my monitors, because of my moving, are usually middle of the road and limited; one thing I know is that 32 bits per pixel is a good thing). As to "At this point ....you.....really don't understand what you are doing," you are correct--absolutely--that is why I mentioned me doing a lot of reading etc. etc. Thanks for your link, btw.
    Among the things I am reading are "Color Confidence - Digital Photogs Guide to Color Management", "Color Management for Photographers - Hands on Techniques for Photoshop Users", "Mastering Digital Printing - Digital Process and Print Series" and "Real World Color Management - Industrial Strength Production Techniques". And just to show you how deep my ignorance still is: what did you mean by 'non-profiled display', or better still, how does one profile a display?

  • Hello, World - trying to understand the steps

    Hello, Experts!
    I am pretty new to Flash Development, so I am trying to understand how to implement the following steps using Flash environment
    http://pdfdevjunkie.host.adobe.com/00_helloWorld.shtml         
    Step 1: Create the top level object. Use a "Module" rather than an "Application" and implement the "acrobat.collection.INavigator" interface. The INavigator interface enables the initial handshake with the Acrobat ActionScript API. Its only member is the set host function, which your application implements. During the initialize cycle, the Acrobat ActionScript API invokes your set host function. Your set host function then initializes itself, can add event listeners, and performs other setup tasks. Your code might look something like this.
    <mx:Module xmlns:mx="http://www.adobe.com/2006/mxml" implements="acrobat.collection.INavigator" height="100%" width="100%" horizontalScrollPolicy="off" verticalScrollPolicy="off" >
    Step 2: Create your user interface elements. In this example, I'm using a "DataGrid" which is overkill for a simple list but I'm going to expand on this example in the future. Also notice that I'm using "fileName" in the dataField. The "fileName" is a property of an item in the PDF Portfolio "items" collection. Later when we set the dataProvider of the DataGrid, the grid will fill with the fileNames of the files in the Portfolio.
    <mx:DataGrid id="itemList" initialize="onInitialize()" width="350" rowCount="12"> <mx:columns> <mx:DataGridColumn dataField="fileName" headerText="Name"/> </mx:columns> </mx:DataGrid>
    Step 3: Respond to the "initialize" event during the creation of your interface components. This is important because there is no coordination between the Flash Player's initialization of your UI components and the set host() call, so these two important milestone events in your Navigator's startup phase could occur in either order. The gist of a good way to handle this race condition is to have your INavigator.set host() implementation and your initialize() or creationComplete() handler both funnel into a common function that starts interacting with the collection only after you have a non-null host and an initialized UI. You'll see in the code samples below and in step 4 that both events funnel into the "startEverything()" function. I'll discuss that function in the 5th step.
                   private function onInitialize():void { _listInitialized = true; startEverything(); }
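    The race-condition gate in Step 3 is a general pattern, not tied to ActionScript. A minimal sketch of the same idea in Java (class and method names are mine, not part of the Acrobat or Flex API) shows the shape: both callbacks funnel into one guarded start method, and real work begins only once the host is non-null and the UI is initialized, in whichever order those two events arrive.
    public class StartupGate {

        private Object host;                // set by the host handshake (cf. "set host")
        private boolean uiInitialized;      // set by the UI initialize/creationComplete callback
        private boolean started;

        public synchronized void onHost(Object host) {
            this.host = host;
            startEverythingIfReady();
        }

        public synchronized void onUiInitialized() {
            this.uiInitialized = true;
            startEverythingIfReady();
        }

        private void startEverythingIfReady() {
            if (started || host == null || !uiInitialized) {
                return;                     // wait for the other event
            }
            started = true;
            // ... start interacting with the collection: set data providers, add listeners, etc.
        }
    }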

  • Trying to understand Android OS updates delays

    This is not another hate mail; it's more about trying to understand the facts and motives regarding the OS updates (or lack thereof).
    Hopefully this material will arrive at someone from Sony with enough power to do something about it.
    Ever since I can remember, Sony has been THE brand for electronics. I can't remember a TV or VCR in my house that wasn't Sony, and they lasted for LOTS of years.
    When a couple of years ago I finally had the money (and the need) for a smartphone, I chose the X10 Mini, which is a great little phone from a great brand, but it's stuck at Android 2.1... which makes it a crippled Android nowadays...
    It really bothered me, the lack of OS updates, so I chose to buy a Galaxy Nexus, but it's really expensive in my country. So my second option was a Galaxy Ace 2, but they hadn't arrived in my country yet, so I went with my third option: the Xperia Sola, knowing beforehand that it's a great phone from a great brand, but that I might get slow OS updates.
    I bought it about 10 days ago when I saw they were starting to roll out the updates.
    10 days later I still don't have my update. And not only that, but I don't see much light at the end of the tunnel...
    I found a thread with the SI numbers that were updated, and there were a bunch on Oct 1, another bunch on Oct 4, and one code on Oct 8, and no other updates since...
    I also read that those who did get the update were having bugs with the OS, and I also found threads for other Xperia models whose updates began rolling out 3 months ago, and there are still people who haven't gotten the update...
    As a customer, and an owner/CEO of a small company, I have a really hard time understanding how a HUGE company like Sony can be making such mistakes...
    I have been thinking of objective reasons, and I can only think of one. I know it's a wild guess, but I'm starting to think that our salvation might be the very thing that means our condemnation: CYANOGENMOD!!!
    Think about it: why would Sony spend more money hiring twice as many programmers, when they can make only one update per phone, then sit back and watch CM releases appear, for all tastes and needs. And... IT'S FREE!!!
    Also, if there is a software-related problem (way more likely than a hardware problem), then the CM developers take the fall instead of Sony. And I'm beginning to see custom OS installers that are more user friendly, so it might be something they take into account when neglecting OS updates.
    If that's the line Sony is following, it's a very risky move and it won't work. Sony Mobile will crash and burn, but it's still a better business plan than "let's get lazy and make ULTRA slow updates so we don't spend a lot of money programming".
    If you can't afford more programmers, stop including so many bells and whistles and make your OS near vanilla. Include a couple of Xperia menus, a custom theme and voila!
    The main reason I wanted the Galaxy Nexus is the vanilla OS, which means immediate OS updates. Sony, on the other hand, takes a year or two to release them after launch. If they release them at all...
    Another thought... why not stop making that many different phones! Really! There are like 5 Xperia models I can't tell one from the other... even with specs side by side!
    You are trying to make too many phones and you are failing with all of them! (Software and software updates are also part of the phone, one of the most important parts...)
    I know hiring programmers is expensive, but you are sacrificing one of the most valuable things a company has, EVER: customer credibility! Which, as you know better than me, takes years to create.
    If Apple had problems with carriers and code approval and stuff, they might get away with it, because they alone have all the devices. If iOS 6 is delayed a few months, it's delayed for everyone, and besides, Apple fanboys rarely complain about Mac products; but Android is a more independent and educated market.
    I'm not saying that Apple users are ignorant, not at all, but I'm pretty sure most iPhone owners don't even know what processor or how much RAM their phone has. They just "swallow" the Apple Way of Life. (It's a good phone because Apple says so.)
    The Android user, on the other hand, because of the fragmentation of the market, has many brands and models to choose from. An Android user about to buy a new phone will most likely go online looking for different models, specs, reviews on websites and forums... etc.
    You can't say "I don't know how HTC and other companies get their updates so soon, but we take a lot longer because Google and the operators must approve the code", because there are many other brands that have exactly the same difficulties or more, since they are smaller, and we can SEE online that they are indeed delivering solid and relatively fast updates.
    Did we miss something? Does HTC use witchcraft to get their code approved?
    My underlying point is this: you are getting lazy... VERY lazy with the software programming for your phones, and WE KNOW IT!
    It's not the "difficulties" you claim, because every brand has those difficulties.
    This isn't 1999, you know. We are in the information age. If you lie to us and tell us that your phones have the latest OS, I can go online and see that they don't (Hello!!!). If I see that the company lies to its customers, I will stop buying their products. If I'm this disappointed with how Sony handles OS updates and their customers' queries about it, then for the first time ever I want to sell my cell phone because I'm not happy with it, or with the brand behind it.
    We also live in the "here and now" age. You can't expect your customers to read about new Android releases on news sites and blogs, and wait YEARS with arms crossed for their update... The world doesn't work like that. Not anymore, at least...
    It's not a matter of how many resources you have, it's about how you use and balance them. GIVE MORE IMPORTANCE TO SOFTWARE UPDATES! IT'S WAY MORE IMPORTANT THAN YOU THINK! LISTEN TO YOUR CUSTOMERS!!!
    You guys at Sony are smart and design great products, but you are not GOD! You are not our wife; no one has sworn allegiance to you.
    If you stop giving us good products and start lying to us, we hate you and stop giving you our money. Simple as that.
    My Sola is beautiful. I love the design, the screen, the hardware... but it hasn't been updated yet to 4.0, not to mention 4.1, which REALLY is the latest version... so stop advertising that your phones have the latest Android OS, unless you want angry customers switching to other companies, which you are getting.
    I also read some stories that Android 4.0 was announced for the Xperia PLAY, and then it was called off... Do you have any idea how pissed off and ripped off I would feel if I had bought my phone based on that information and then you said it wouldn't be available?
    Well, actually, right now my position is worse, since you SAY you will update my phone, but you don't say when, and I read online about people still not getting their updates months after the rollout started, so I'm in limbo right now...
    As a company, one of the worst things you can do is call your customers stupid, and when I see the answers you give in this forum, I feel insulted. I feel like they are talking to an idiot or an ignorant person with the so-called "diplomatic" answers, which are basically empty excuses for not doing your job right.
    You gave us the frikking CDs, DVDs and Blu-rays!!! Don't tell us you can't tweak an already built OS in a year!
    I really hope you change your OS update policies really soon, before you lose the already small cellphone market share you have, or at least change your P.R. and C.M. policies towards more open ones.
    We are all human and make mistakes, but we customers really appreciate honesty and truth.
    Have an open conversation with your customers! Don't lie about your shortcomings! Accept them and ask the community to help you solve them; ask them what their biggest problems are, what features are most important to them, how often they expect updates... LISTEN TO THEM!!
    "Success is a menace. It tricks smart people into thinking they can't lose."
    PS: Nothing personal with the mods of this forum; I'm not killing the messenger. I know that you can ONLY give the info you are allowed to give, and even if you wanted to, you probably don't know the answers yourself, since you work in the Communications department, not Development or anything technical, and if you can't give out certain info, then they probably won't give it to you either... My message is to the company as a whole. I just hope you will be a good messenger and give this to whoever needs to read it.

    My bad, the number of phones released is closer to 40, hehe.
    I know it's all about money, and I know Sony feels obligated to neglect users who haven't given them money after an x amount of time. However, it's not a matter of making the phones obsolete earlier so users want to buy a new phone faster and therefore bring in more money.
    A person will buy a new phone when he/she has the money to do so and wants to do so.
    It's not a matter of WHEN. It's a matter of WHAT.
    The question is not "When will that user buy a new phone?", but rather "When that user buys a new phone, whenever that is, what phone will it be?"
    I have a love/hate relationship with Apple. I would never use an iPhone. I would love having any Mac, if someone gave it to me, but I would never spend my hard-earned dollars on such an overpriced piece of hardware, on general principle.
    However, I do recognize that Steve Jobs was a business genius. Whether you love or hate his ideas and methods, he turned a garage project into the biggest company in the world, with a market value higher than Exxon's on a third of its assets.
    Apple is a money-making machine, and that is where the "hate" part of my relationship comes from.
    However, it surprised me a lot to see that they released iOS 6 for the iPhone 3GS, released in 2008!
    That gets you thinking that, for all the "SELL NOW" culture Apple has, they also support their older devices that people bought years ago but can't replace with a new phone right now. However, when those people can do it, it will surely be another iPhone, because they FEEL that the company listens to and cares for them.
    Also, if you jump from iOS 6 on the 3GS to a brand new iPhone 5, the transition will be virtually non-existent, except for Siri and a couple of features.
    However, jumping from Android 1.5 or 2.1 to Jelly Bean might not be so easy for some users, making them more likely to give the iPhone a shot.
    Since they have to adapt to another phone anyway, they might as well try the apple...
    And for old users, it gives people a sense of continuity and of care for the user. Otherwise we feel like we are being kicked out of a restaurant right after we paid the bill.

  • Trying to understand OIM - Please help

    Hello All,
    I am pretty new to OIM, just trying to understand how OIM works. For the past 4 years I have worked with Sun IdM, and I am planning to switch over to OIM.
    I have read some documentation; I think OIM performs basic provisioning and contains out-of-the-box connectors for basic provisioning. I have some questions, can anybody please help?
    - Sun IdM uses the Express language to develop custom workflows or forms; in OIM, which language do you use to develop workflows, Java or some other language?
    - If I want to provision users on AS/400, HP OpenVMS or AIX systems, how can I do that? I don't see any out-of-the-box connectors for these resources, so in order to integrate them do we need to write our own custom connectors?
    - If the out-of-the-box connector does not support a specific function on a resource, what options do we have? For example, if the AD connector does not support deleting the Exchange mailbox, how would you perform this in OIM? Do we need to write Java code for this?
    - How much Java coding is necessary in OIM?
    - Does OIM support role-based provisioning?
    Please share any information.
    Thanks in advance.

    Sun IdM uses the Express language to develop custom workflows or forms; in OIM, which language do you use to develop workflows, Java or some other language?
    - Java.
    If I want to provision users on AS/400, HP OpenVMS or AIX systems, how can I do that? I don't see any out-of-the-box connectors for these resources, so in order to integrate them do we need to write our own custom connectors?
    - If an OOTB connector is not available, then you'll have to build your own connector for the target resource.
    If the out-of-the-box connector does not support a specific function on a resource, what options do we have?
    - You'll have to customize the connector as per your requirements, or write your own.
    How much Java coding is necessary in OIM?
    - We can't quantify how much Java; it depends entirely on your requirements how much code you'll have to write. But everything will be done using Java.
    Does OIM support role-based provisioning?
    - In OIM a Group represents a Role. At a small scale it supports this, but for large-scale use you'll have to use Oracle Role Manager, as you would Sun Role Manager.
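    To give a feel for the "everything is Java" answer above, here is a very rough, generic sketch of the kind of plain Java logic a custom process-task adapter might wrap for a target with no OOTB connector (say AS/400 or OpenVMS reached over some remote protocol). Class, method and response-code names here are illustrative assumptions only, not an actual OIM or connector API.
    // Hypothetical sketch: a plain Java method whose String return value an adapter
    // could map to a task response code. The connection logic is deliberately omitted.
    public class LegacyHostProvisioner {

        public String createAccount(String hostName, String login,
                                    String firstName, String lastName) {
            try {
                // Connect to the target (telnet/SSH/vendor API) and create the account here.
                return "SUCCESS";
            } catch (Exception e) {
                return "CREATE_ACCOUNT_FAILED";
            }
        }
    }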

  • UTL_HTTP.end_of_body Exception Error.  Trying to Understand the Reason Why?

    I have the following PL/SQL function that raises an end_of_body error. This started when we migrated from 10g to 11g. It is simple enough to catch so that the error does not stop the function flow, but the error causes the OCI driver in OBIEE to error, which prevents the OBIEE iBot from executing. I am trying to understand why this error is occurring; not sure if we have a permissions issue on the UTL_HTTP package or what.
    Has anyone seen this problem in 11g? Suggestions on resolving it would be great. Thanks.
    FUNCTION AA_DEMO_PO_WSDL(IN_MESSAGE IN VARCHAR2)
    RETURN VARCHAR IS
      soap_request varchar2(30000);
      soap_respond varchar2(30000);
      http_req     utl_http.req;
      http_resp    utl_http.resp;
      launch_url   varchar2(240);
      o_message    varchar2(240);
      po_amount    number := 2000;
      total_calls  number := 0;
      cursor c_PO_exists is Cursor Logic..
    begin
      total_calls := 0;
      for po_wsdl in c_PO_exists
      loop
        total_calls := total_calls + 1;
        soap_request := '<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
    <soap:Header/>
    <soap:Body xmlns:ns1="http://xmlns.oracle.com/PurchaseOrder_Approval">
    <ns1:ProcessRequest><ns1:input>PO' || po_wsdl.order_no || '</ns1:input></ns1:ProcessRequest>
    </soap:Body>
    </soap:Envelope>';
        begin
          http_req := utl_http.begin_request('myURL/PurchaseOrder_Approval/1.0', 'POST', 'HTTP/1.1');
          utl_http.set_header(http_req, 'Content-Type', 'text/xml');
          utl_http.set_header(http_req, 'Content-Length', length(soap_request));
          utl_http.set_header(http_req, 'SOAPAction', 'initiate');
          utl_http.write_text(http_req, soap_request);
          http_resp := utl_http.get_response(http_req);
          utl_http.read_text(http_resp, soap_respond);
          utl_http.end_response(http_resp);
        exception
          when utl_http.end_of_body then
            -- ORA-29266: there is no more response body to read; just close the response.
            utl_http.end_response(http_resp);
          when utl_http.too_many_requests then
            utl_http.end_response(http_resp);
            o_message := 'End_Reponse' || ' from proc.';
          when others then
            -- Any other error ends the function and reports the Oracle error text.
            o_message := SQLERRM || ' from proc.';
            return o_message;
        end;
      end loop;
      return 'Workflow Initiated-' || to_char(total_calls);
    end AA_DEMO_PO_WSDL;

    Hi, thanks.
    It is Oracle 10g.
    The exception is: ORA-29266: end-of-body reached
    ORA-06512: at "SYS.UTL_HTTP", line 1349
    and then the line in my function.
    damorgan wrote:
    But I do note that when I do this I always do a get_header_count and get_header before get_read.
    What do you mean by get_read?
    Thanks for the link, appreciated.

  • I am trying to understand the licensing procedures for using tabKiller for 3.5.7 firefox. Who should I contact for this? I do not see any customer service phone number for Firefox

    I am trying to understand the licensing procedures for using tabKiller for 3.5.7 firefox. Who should I contact for this? I do not have the customer service phone number for Firefox

    Tab Killer is not created by Mozilla, it is created by a private individual who has made it available at no cost for other people to use.

  • [SOLVED] Trying to understand the "size on disk" concept

    Hi all,
    I was trying to understand the difference between "size" and "size on disk".
    A Google search gave plenty of results and I thought I had a clear idea about
    it: all data is stored in small chunks of a fixed size depending on the
    filesystem, and the last chunk is going to have some wasted space which
    will *have* to be allocated. Thus the extra space on disk.
    However, I'm still confused. When I look at my home folder, the size on disk
    is more than 320 GB, whereas my partition is actually less than 80 GB, so
    I guess I'm missing something. Could somebody explain to me what
    320 GB of 'size on disk' means?
    Thanks a lot in advance..
    Last edited by geo909 (2011-12-15 23:17:25)

    Hi all,
    Thank you for your replies. My file manager is indeed PCManFM, and
    it does indeed seem to be an issue with it. In b4data's link the last post reads:
    B-Con wrote:
    FWIW, I found the problem. (This bug is still persistent in v0.9.9.)
    My (new) PCManFM bug report is here: http://sourceforge.net/tracker/index.ph … tid=801864
    I submitted a potential patch here: http://sourceforge.net/tracker/?func=de … tid=801866
    Bottom line is that the file's block size and block count data from the file's inode wasn't being interpreted and used properly. The bug is in PCManFM, not any dependent libraries. Details are in the bug report.
    Since PCManFM is highly used by the Arch community, I figured I should post an update here so that there's some record of it in our community. Hopefully this gets addressed by the developer(s) relatively soon. :-)
    I guess that pretty much explains things. And I think I did understand the 'size on disk' concept
    anyway.
    Thanks again!
    Last edited by geo909 (2011-12-15 23:17:10)
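    The block-allocation idea from the original question is easy to check with a few lines of arithmetic. The sketch below (Java, with an assumed 4096-byte block size) computes the "size on disk" a correctly behaving tool should report: the file size rounded up to whole blocks, so it is always at least the plain size (sparse files aside) and cannot normally exceed the partition the way the buggy 320 GB report above does.
    public class SizeOnDisk {
        public static void main(String[] args) {
            long blockSize = 4096;                       // assumed filesystem block size
            long[] fileSizes = { 1, 4096, 5000, 123456 };
            for (long size : fileSizes) {
                long blocks = (size + blockSize - 1) / blockSize;   // round up to whole blocks
                long onDisk = blocks * blockSize;
                System.out.printf("size=%d -> size on disk=%d (wasted %d bytes)%n",
                        size, onDisk, onDisk - size);
            }
        }
    }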

  • Trying to understand the sound system

    Here's my problem. My mic didn't work (neither the front mic nor the line-in in the rear), so after some research and trial and error I found that if I do
    modprobe soundcore
    my mic works on both the jacks
    But here's where my confusion lies. This is the output of lsmod |grep snd before probing explicitly for soundcore
    [inxs ~ ]$ lsmod |grep snd
    snd_hda_codec_analog 78696 1
    snd_hda_intel 22122 1
    snd_hda_codec 77927 2 snd_hda_codec_analog,snd_hda_intel
    snd_hwdep 6325 1 snd_hda_codec
    snd_pcm_oss 38818 0
    snd_pcm 73856 3 snd_hda_intel,snd_hda_codec,snd_pcm_oss
    snd_timer 19416 1 snd_pcm
    snd_page_alloc 7121 2 snd_hda_intel,snd_pcm
    snd_mixer_oss 15275 2 snd_pcm_oss
    snd 57786 8 snd_hda_codec_analog,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm_oss,snd_pcm,snd_timer,snd_mixer_oss
    soundcore 6146 2 snd
    [inxs ~ ]$
    So as you can see, soundcore's already loaded, so why do I have to explicitly load it again to get the mic to work?
    Once I add soundcore to my MODULES array and reboot, the lsmod output is also the same as above.
    So my question is -- what does the explicit loading of soundcore do, that is not done by auto-loading of that module?

    Oh... since your topic is Trying to understand the sound system, that puts you (and me) inside the whole world's population... chuckle. But I thought I'd pass along a document written by probably "The" main ALSA developer that I totally stumbled across just 3 days ago.
    Go here:
    http://kernel.org/pub/linux/kernel/people/tiwai/docs/
    and download the flavor of your choice of the "HD-Audio" document, or simply view it online. It documents the deepest dive into the current ALSA snd_hda_* layers and issues that I've found to date (but still leaves me wanting).
    Why that document isn't plastered across the interwebs is beyond me. I only get 11 hits when I search for it... such are the secrets of the ALSA world I guess.
    Last edited by pigiron (2011-08-26 18:26:48)

  • Trying to understand the MODEL clause

    Hi All,
    I'm finally biting the bullet and learning how to use the model clause, but I'm having a bit of trouble.
    The following example data comes from the book "Advanced SQL Functions in Oracle 10g".
    with sales1 as (select 'blueberries' product
                          ,'pensacola' location
                          ,9000 amount
                          ,2001 year
                      from dual
                    union all
                    select 'cotton', 'pensacola',16000,2001 from dual
                    union all
                    select 'lumber','pensacola',3500,2001 from dual
                    union all
                    select 'cotton','mobile',24000,2001 from dual
                    union all
                    select 'lumber', 'mobile',2800,2001 from dual
                    union all
                    select 'plastic','mobile',32000,2001 from dual
                    union all
                    select 'blueberries','pensacola',9000,2002 from dual
                    union all
                    select 'cotton', 'pensacola',16000,2002 from dual
                    union all
                    select 'lumber','pensacola',3500,2002 from dual
                    union all
                    select 'cotton','mobile',24000,2002 from dual
                    union all
                    select 'lumber', 'mobile',2800,2002 from dual
                    union all
                    select 'plastic','mobile',32000,2002 from dual
                    union all
                    select 'blueberries','pensacola',9000,2003 from dual
                    union all
                    select 'cotton', 'pensacola',16000,2003 from dual
                    union all
                    select 'lumber','pensacola',3500,2003 from dual
                    union all
                    select 'cotton','mobile',24000,2003 from dual
                    union all
                    select 'lumber', 'mobile',2800,2003 from dual
                    union all
                    select 'plastic','mobile',32000,2003 from dual)
    select location, product, year, s
    from sales1
    model
    --return updated rows
    partition by (product)
    dimension by (location,year)
    measures (amount s) ignore nav
    (s['pensacola',2003] = sum(s)['pensacola',cv() > cv()-1])
    I would have expected the rule to return the sum of all amounts for pensacola where the year > 2003 - 1 = 2002, which would make the total for [blueberries,2003] = 18000, but instead it comes out as 27000, apparently summing all values for blueberries in that partition... equivalent to sum(s)['pensacola',ANY].
    How would I go about making s['pensacola',2003] equal the sum of itself plus the previous row?
    I realise I can do
    s['pensacola',cv()]+s['pensacola',cv()-1]
    but I'm really trying to understand why what I have doesn't appear to work the way I expect.

    Because
    (s['pensacola',2003] = sum(s)['pensacola',cv() > cv()-1])
    means
    (s['pensacola',2003] = sum(s)['pensacola',cv(year) > cv(year)-1])
    means
    (s['pensacola',2003] = sum(s)['pensacola',2003 > 2003-1])
    means
    (s['pensacola',2003] = sum(s)['pensacola',2003 > 2002])
    means
    (s['pensacola',2003] = sum(s)['pensacola',year is any])
    s['pensacola',cv()]+s['pensacola',cv()-1]
    means
    sum(s)['pensacola',year between cv()-1 and cv()]
