Large number values shown as blank?

Values larger than 2147483647 are displayed as blank in the data grid view. When I click on one to edit it, the correct value shows up; if I then press Escape, the value disappears again. Is this a bug, or a preference setting I haven't found yet?
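
A note on that threshold: 2147483647 is exactly Integer.MAX_VALUE (2^31 - 1), so a plausible (but unconfirmed) explanation is that the grid narrows values through a signed 32-bit integer and blanks anything that overflows. A quick Java illustration of the limit:

    // 2147483647 is the largest signed 32-bit value; anything above it
    // cannot survive a narrowing conversion to int.
    public class IntLimit {
        public static void main(String[] args) {
            System.out.println(Integer.MAX_VALUE);   // 2147483647
            long justAbove = 2147483648L;            // needs a long
            System.out.println((int) justAbove);     // -2147483648 after narrowing
        }
    }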

The problem must be the same as in:
acctno is null when displaying data

Similar Messages

  • Passing a large number of column values to an Oracle insert procedure

I am quite new to the ODP space, so can someone please tell me the best and most efficient way to pass a large number of column values to an Oracle procedure from my C# program? Passing a small number of values as parameters seems OK, but when there are many, this seems inelegant.

    Is it possible that your table with a staggering number of columns, or a method that collapses without so many inputs, is ultimately what is inelegant?
    I once did a database conversion from VAX RMS system with a "table" with 11,000 columns to a normalized schema in an Oracle database. That was inelegant.
    Michael O
    http://blog.crisatunity.com
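
    A sketch of the collection-parameter approach: instead of dozens of scalar binds, define a SQL collection type and pass one parameter. Shown in Java/JDBC purely for illustration (ODP.NET has an equivalent in PL/SQL associative array binding); the type name, procedure, and connection details below are hypothetical.

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        // Hypothetical sketch: bind many values as one collection parameter.
        // Assumes these objects exist in the schema:
        //   CREATE TYPE t_varchar_tab AS TABLE OF VARCHAR2(100);
        //   CREATE PROCEDURE insert_row(p_vals IN t_varchar_tab) AS ...
        public class ArrayBindSketch {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                    Object[] values = { "val1", "val2", "val3" }; // ...many more
                    // Oracle-specific call to build a java.sql.Array from a named type.
                    java.sql.Array tab = conn.unwrap(oracle.jdbc.OracleConnection.class)
                            .createOracleArray("T_VARCHAR_TAB", values);
                    try (CallableStatement cs = conn.prepareCall("{ call insert_row(?) }")) {
                        cs.setArray(1, tab);
                        cs.execute();
                    }
                }
            }
        }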

  • Business connector scheduler hangs and Next run value is a large number

    Hi All,
    I see that the scheduler in the business connector hangs, and the Next run value shows a huge number like 9223370831700598.0 sec. Can anyone suggest what can be done?
    The BC version is currently 4.7. The problem is resolved every time I restart the server.

    Hi,
    I'm not aware of the reason, but I guess you must be using simple scheduled tasks.
    Try using complex repeating tasks, where you specify days, minutes, etc.
    It should work fine.
    Hope it helps. Reward if useful.
    Regards,
    Siddhesh S.Tawate

  • Large number of event Log entries: connection open...

    Hi,
    I am seeing a large number of entries in the event log of the type:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    Are these anything I should be concerned about? I have tried a couple of forum and Google searches, but I don't quite know where to start beyond pasting the first bit of the message. I haven't found anything obvious from those searches.
    DHCP table lists 192.168.1.78 as the desktop PC on which I'm writing this.
    Please could you point me in the direction of any resources that will help me to work out if I should be worried about this?
    A slightly longer extract is shown below:
    21:49:17, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/TIME_WAIT ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:41820] ppp0 NAPT)
    21:49:15, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [81.154.101.160:51163] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:11, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] TIME_WAIT/CLOSED ppp0 NAPT)
    21:49:03, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [178.190.63.75:55535] CLOSED/SYN_SENT ppp0 NAPT)
    21:49:00, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [2.96.4.85:23939] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:59, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.144.143.222:21617] CLOSED/TIME_WAIT ppp0 NAPT)
    21:48:58, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28188] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [41.218.222.34:28288] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:18048] ppp0 NAPT)
    21:48:57, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.132.123.255:54199] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [86.144.91.49:60704] ppp0 NAPT)
    21:48:55, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:50875] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:45, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:57656] ppp0 NAPT)
    21:48:39, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [78.150.251.216:56975] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:29, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [79.99.145.46:8368] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:27, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [90.192.249.173:45250] ppp0 NAPT)
    21:48:16, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [212.17.96.246:62447] ppp0 NAPT)
    21:48:10, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [82.16.198.117:49942] TIME_WAIT/CLOSED ppp0 NAPT)
    21:48:08, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [213.205.231.156:51027] CLOSED/SYN_SENT ppp0 NAPT)
    21:48:04, 11 Mar.
    IN: ACCEPT [57] Connection closed (Port Forwarding: TCP 192.168.1.78:14312 <-->86.128.58.172:14312 [89.153.251.9:53729] TIME_WAIT/CLOSED ppp0 NAPT)
    21:47:54, 11 Mar.
    IN: ACCEPT [54] Connection opened (Port Forwarding: UDP 192.168.1.78:14312 <-->86.128.58.172:14312 [80.3.100.12:37150] ppp0 NAPT)

    Hi,
    Thank you for the response. I think, but can't remember for sure, that UPnP was already switched off when I captured that log. Anyway, even if it wasn't, it is now. So I will see what gets captured in my logs.
    I've just had to restart my Home Hub because of other connection issues and I notice that the first few entries are also odd:
    19:35:16, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:34:31, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:34:04, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:46, 12 Mar.
    IN: BLOCK [12] Spoofing protection (IGMP 86.164.178.188->224.0.0.22 on ppp0)
    19:33:45, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:39, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:33, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:33:29, 12 Mar.
    IN: BLOCK [15] Default policy (UDP 111.252.36.217:26328->86.164.178.188:12708 on ppp0)
    19:33:16, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 193.113.4.153:80->86.164.178.188:49572 on ppp0)
    19:33:14, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:14, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:44266 on ppp0)
    19:33:14, 12 Mar.
    ( 164.240000) CWMP: session completed successfully
    19:33:13, 12 Mar.
    ( 163.700000) CWMP: HTTP authentication success from https://pbthdm.bt.mo
    19:33:05, 12 Mar.
    BLOCKED 106 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 213.1.72.209:80->86.164.178.188:49547 on ppp0)
    19:33:05, 12 Mar.
    BLOCKED 94 more packets (because of Default policy)
    19:33:05, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:33:05, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 199.59.148.87:443->86.164.178.188:49531 on ppp0)
    19:33:05, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49250->173.194.78.125:5222 on ppp0)
    19:33:04, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:33:04, 12 Mar.
    ( 155.110000) CWMP: Server URL: https://pbthdm.bt.mo; Connecting as user: ACS username
    19:33:04, 12 Mar.
    ( 155.090000) CWMP: Session start now. Event code(s): '1 BOOT,4 VALUE CHANGE'
    19:32:59, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:54, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49462->199.59.149.232:443 on ppp0)
    19:32:53, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:52, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:51, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:48, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:47, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49266->173.194.34.101:443 on ppp0)
    19:32:46, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:46, 12 Mar.
    BLOCKED 4 more packets (because of First packet is Invalid)
    19:32:45, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49461->199.59.149.232:443 on ppp0)
    19:32:44, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:44, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:43, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49398->193.113.4.153:80 on ppp0)
    19:32:42, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:42, 12 Mar.
    BLOCKED 3 more packets (because of First packet is Invalid)
    19:32:42, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49277->119.254.30.32:443 on ppp0)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:41, 12 Mar.
    BLOCKED 1 more packets (because of First packet is Invalid)
    19:32:41, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:38, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:36, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49330->173.194.67.94:443 on ppp0)
    19:32:34, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49463->199.59.149.232:443 on ppp0)
    19:32:30, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 66.193.112.93:443->86.164.178.188:47022 on ppp0)
    19:32:30, 12 Mar.
    ( 120.790000) CWMP: session closed due to error: WGET TLS error
    19:32:30, 12 Mar.
    ( 120.140000) NTP synchronization success!
    19:32:30, 12 Mar.
    BLOCKED 1 more packets (because of Default policy)
    19:32:29, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49458->217.41.223.234:80 on ppp0)
    19:32:28, 12 Mar.
    OUT: BLOCK [65] First packet is Invalid (TCP 192.168.1.78:49280->119.254.30.32:443 on ppp0)
    19:32:26, 12 Mar.
    ( 116.030000) NTP synchronization start
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (First packet in connection is not a SYN packet: TCP 192.168.1.78:49442->74.125.141.91:443 on ppp0)
    19:32:25, 12 Mar.
    OUT: BLOCK [15] Default policy (TCP 192.168.1.78:49310->204.154.94.81:443 on ppp0)
    19:32:25, 12 Mar.
    IN: BLOCK [15] Default policy (TCP 88.221.94.116:80->86.164.178.188:49863 on ppp0)

  • How to handle a large number of query parameters for a Browse screen

    I need to implement advanced search functionality in a browse screen for a large table. The table has 80+ columns and therefore will have a large number of possible query parameters. The screen will be built on a modeled query with all of the parameters marked as optional. Given the large number of parameters, I am thinking that it would be better to use a separate screen to receive the parameter input from the user, rather than a popup. Is it possible, for example, to have a search button on the browse screen (screen A) open a new screen (screen B) that contains all of the search parameters, have the user enter the parameters they want, then click a button to send all of the parameters back to screen A, where the query is executed and the search results are returned to the table control? This would effectively make screen B an advanced modal window for screen A. In addition, if the user were to execute the query and then want to change a parameter, they would need to be able to re-open screen B and have all of their original parameters still set. How would you implement this, or otherwise deal with a large number of optional query parameters in the HTML client? My initial thinking is to store all of the parameters in an object and use beforeShown/afterClosed to pass them between the screens, but I'm not quite sure how to make that work. TIA

    Wow Josh, thanks. I have a lot of reading to do. What I ultimately plan to do with this (my other posts relate to this too) is have a separate screen for advanced filtering that also allows the user to save their queries if desired.
    There is an excellent way to get at all of the query information in the Query_Executed() method. I just put an extra Boolean parameter in the query called "SaveQuery", and when it is true, the Query_Executed event triggers an entry into a table with the query name, user name, and the parameter value pairs that the user entered. Upon revisiting the screen, I want the user to be able to select from their saved queries and load all the screen parameters (screen properties) from their selected query.
    I almost have it working. It may be as easy as marking all of the screen properties that are query parameters as screen parameters (not required), then passing them in from the saved query data (filtered by username, queryname, and selected item). I'll post an update once I get it. Probably will have some more questions as I go through it. Thanks again!

  • Import: CVD Base Value field is blank in MIGO

    All SAP gurus,
    We are running an imports scenario.
    For this we have created Z condition types for CVD and BCD, and maintained all the condition types in the pricing procedure (Customs Duty, CVD, etc.).
    Excise defaults are also maintained.
    The problem we are facing is:
    the Base Value field is blank in MIGO, and CVD amounts are not coming automatically from the PO.
    When we maintain the values for CVD manually, the accounting entry is correct.
    Which settings might be missing?
    Regards,

    HI,
    You need to enter the customs invoice number while doing GR with reference to the import PO: it will ask you for the commercial invoice number, which is your customs clearing agent's invoice number. Once you enter the invoice number and the year of the invoice, the base value will come from there. Also check that the quantity with which the customs commercial invoice was posted in the system equals the GR quantity.
    BR

  • How do I create new versions of a large number of images, and place them in a new location?

    Hello!
    I have been using Aperture for years, and have just one small problem.  There have been many times where I want to have multiple versions of a large number of images.  I like to do a color album and B&W album for example.
    Previously, I would click on all the images at once and select New Version. The problem is that this puts all of the new versions in a stack. I then have to open all the stacks and, one by one, move the new versions to a different album. Is there any way to streamline this process? When it's only 10 images, it's no problem; when it's a few hundred (or more), it's rather time-consuming.
    What I'm hoping for is a way to either automatically have new versions populate a separate album, or a way to easily select all the new versions I create at one time and move them, with as few steps as possible, to a new destination.
    Thanks for any help,
    Ricardo

    Ricardo,
    in addition to Kirby's and phosgraphis's excellent suggestions, you may want to use the filters to further restrict your versions to the ones you want to access.
    For example, you mentioned
      I like to do a color album and B&W album for example.
    You could easily separate the color versions from the black-and-white versions by using the filter rule:
    Adjustment includes Black&white
         or
    Adjustment does not include Black&white
    With the above filter setting (Add rule > Adjustment includes Black&White), only the versions with the Black&White adjustment are shown in the Browser. You could do something similar to separate cropped versions from uncropped ones.
    Regards
    Léonie

  • Barcode CODE 128 with large number (being rounded?) (BI / XML Publisher 5.6.3)

    After applying Patch 9440398 as per Oracle's Doc ID 1072226.1, I have successfully created a CODE 128 barcode.
    But I am having an issue when creating a barcode whose value is a large number; specifically, a number with more than around 16 digits.
    Here's my situation...
    In my RTF template I am encoding a barcode for the number 420917229102808239800004365998 as follows:
    <?format-barcode:420917229102808239800004365998;'code128c'?>
    I then run the report and a PDF is generated with the barcode. Everything looks great so far.
    But when I scan the barcode, this is the value I am reading (tried it with several different scanner types):
    420917229102808300000000000000
    So:
    Value I was expecting: 420917229102808239800004365998
    Value I actually got:  420917229102808300000000000000
    It seems as if the number is getting rounded at the 16th digit or so (it varies depending on the value I use).
    I have tried several examples and all seem to do the same, but anything with 15 digits or less seems to work perfectly.
    Any ideas?
    Manny
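
    This is consistent with the value being run through a double-precision float somewhere in the pipeline (an assumption; nothing in the template confirms it): a double carries only about 15-16 significant decimal digits. A quick Java check of the effect:

        import java.math.BigDecimal;

        // Shows why a 30-digit value cannot survive a round-trip through a
        // double: the binary format preserves only ~15-16 significant digits.
        public class BarcodePrecision {
            public static void main(String[] args) {
                String original = "420917229102808239800004365998"; // 30 digits
                double asDouble = Double.parseDouble(original);
                // Exact decimal value of the nearest representable double:
                System.out.println(new BigDecimal(asDouble).toPlainString());
                // Matches the original only in the leading ~16 digits, much
                // like the scanned barcode value above.
            }
        }

    If that is the cause, keeping the value as a literal string all the way to the barcode encoder would avoid the rounding.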

    Yes, I have.
    But I have found the cause now.
    When working with parameters coming in from the concurrent manager, all the parameters defined in the concurrent program in EBS need to be in the same case (upper or lower) as they are defined in the data template.
    Once I changed them all to the same case, it worked.
    Thanks for the effort.
    regards
    Ronny

  • Communicate a large number of parameters and variables between VeriStand and a LabVIEW model

    We have a dyno setup with a PXI-E chassis running VeriStand 2014 and Inertia 2014. In order to enhance the capabilities and timing of VeriStand, I would like to use LabVIEW models to perform tasks not possible in VeriStand and Inertia. An example of this is determining the maximum of a large number of thermocouples. VeriStand has a compare function, but it compares only two values at a time, which makes for some lengthy and inflexible programming. LabVIEW, on the other hand, has a function which allows one to get the maximum of the elements in an array in a single step. To use LabVIEW I need to "send" the 50 or so thermocouples to the LabVIEW model. In addition to the variables which need to be communicated between VeriStand and LabVIEW, I also need to present LabVIEW with the threshold and configuration parameters. From the forums and user manuals I understand that one has to use the connector pane in LabVIEW and mapping in VeriStand System Explorer to expose the inports and outports. The problem is that the LabVIEW connector pane is limited to 27 I/O. How do I overcome that limitation?
    BTW, I am fairly new to LabVIEW and VeriStand.
    Thank you.
    Richard

    @Jarrod:
    Thank you for the help. I created a simple test model and now understand how I can use clusters for a large number of variables. Regarding the mapping process: can one map a folder of user channels to a cluster (one-step mapping)? Alternatively, I understand one can import a mapping (text) file in System Explorer. Is this import partial, or does it replace all the mappings? The reason I am asking is that, if it is partial, I can have separate mapping files for different configurations, and my final mapping can be a combination of imported mapping files.
    @SteveK:
    Thank you for the hint on using a Custom Device. I understand that the Custom Device will be much more powerful and can be more generic. The problem at this stage is that my limitations in programming in LabVIEW are far greater than LabVIEW models' limitations in VeriStand. I'll definitely consider the Custom Device route once I am more proficient with LabVIEW. Hopefully I'll be able to re-use some of the VIs I created for the LabVIEW models.
    Thanks
    Richard

  • Large number of JSP performance

    Hi,
    a colleague of mine ran tests with a large number of JSPs and identified a performance problem. I believe I found a solution to his problem. I tested it with WLS 5.1 SP2 and SP3 and the MS jview SDK 4.
    The issue was the duration of the initial call of the nth JSP, which is our situation as we are doing site hosting. The solution is able to perform around 14 initial invocations/s, no matter whether the invocation is the first or the 3000th, and the throughput can go up to 108 JSPs/s when the JSPs are already loaded (the JSPs being the snoopservlet example copied 3000 times). The ratios are more interesting than the absolute values, as the test machine (client and WLS 5.1) was a 266 MHz laptop.
    I repeat the post of Marc on 2/11/2000 as it is an old one:
    Hi all,
    I'm wondering if any of you has experienced performance issues when deploying a lot of JSPs. I'm running WebLogic 4.51 SP4 with the performance pack on NT4 and JDK 1.2.2. I deployed over 3000 JSPs (identical but with distinct names) on my server. I took care to precompile them off-line. To run my tests I used a servlet selecting one of them at random and redirecting the request:
    getServletContext().getRequestDispatcher(randomUrl).forward(request, response);
    The response time slows down dramatically as the number of distinct JSPs invoked grows (up to 100 times the initial response time).
    I made some additional tests. When you set the properties:
    weblogic.httpd.servlet.reloadCheckSecs=-1
    weblogic.httpd.initArgs.*.jsp=..., pageCheckSeconds=-1, ...
    the response time for a new JSP seems linked to a "capacity increase process" and depends on the number of previously activated JSPs. If you invoke a previously loaded page, the server answers really fast with no delay. If you set the previous properties to any other value (0 for example), the response time remains bad even when you invoke a previously loaded page.
    SOLUTION DESCRIPTION
    Intent
    The package described below is designed to allow:
    * Fast invocation even with a large number of pages (which can be the case with web hosting)
    * Dynamic update of compiled JSPs
    Implementation
    The current implementation has been tested with JDK 1.1 only and works with the MS SDK 4.0. It has been tested with WLS 5.1 with service packs 2 and 3. It should work with most application servers, as its requirements are limited: it needs a JSP to be able to invoke a class loader.
    Principle
    For fast invocation, it does not support dynamic compilation as described in the JSP model. There is no automatic recognition of modifications; instead, a JSP is provided to invalidate pages which must be updated.
    We assume pages managed through this package are declared in weblogic.properties as
    weblogic.httpd.register.*.ocg=ocgLoaderPkg.ocgServlet
    This definition means that when a servlet or JSP with a .ocg extension is requested, it is forwarded to the package. It implies two things:
    * Regular JSP handling and package-based handling can coexist in the same application server instance.
    * It is possible to extend the implementation to support many extensions with as many package instances.
    The package (ocgLoaderPkg) contains two classes:
    * ocgServlet, a servlet instantiating JSP objects using a class loader.
    * ocgLoader, the class loader itself.
    A single class loader object is created. Both the JSP instances and classes are cached in hashtables. The invalidation JSP is named jspUpdate.jsp. To invalidate a JSP, it simply removes the object and class entries from the caches.
    ocgServlet
    * Lazily creates the class loader.
    * Retrieves the target JSP instance from the cache, if possible.
    * Otherwise it uses the class loader to retrieve the target JSP class, creates a target JSP instance and stores it in the cache.
    * Forwards the request to the target JSP instance.
    ocgLoader
    * If the requested class does not have the extension ocgServlet is configured to process, it behaves as a regular class loader and forwards the request to the parent or system class loader.
    * Otherwise, it retrieves the class from the cache, if possible.
    * Otherwise, it loads the class.
    Do you think it is a good solution?
    I believe this solution is faster than the standard WLS one, because it is a very small piece of code, but also because:
    - my class loader is deterministic: if the file has the right extension I don't call the class loader hierarchy first
    - I don't try to support jars. That was one of the hardest design decisions. We definitely need a way to update a specific page, but at the same time someone told us NT could have problems handling 3000 files in the same directory (it seems he was wrong).
    - I don't try to check whether a class has been updated. I have to ask for a refresh using a JSP for now, but it could be an EJB.
    - I don't try to check whether a source has been updated.
    - As I know the number of JSPs, I can set the initial capacity of the hashtables I use as caches pretty accurately, and avoid rehashing.
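
    A minimal sketch of the caching scheme described above, with hypothetical names (the original ocgServlet/ocgLoader source is not included in the post): page instances are cached in a map, invalidation just drops the entry, and each (re)load uses a throwaway class loader, since a given loader may define a class name only once.

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // Hypothetical sketch of the cache-and-invalidate pattern described
        // in the post. Thread-safe maps stand in for the hashtables.
        public class PageCache {
            private final Map<String, Object> instances = new ConcurrentHashMap<>();
            private final Path classDir;

            public PageCache(Path classDir) {
                this.classDir = classDir;
            }

            public Object getPage(String name) throws Exception {
                Object page = instances.get(name);
                if (page == null) {
                    page = load(name);
                    instances.put(name, page);
                }
                return page;
            }

            // Called by the invalidation page (jspUpdate.jsp in the post).
            public void invalidate(String name) {
                instances.remove(name);
            }

            private Object load(String name) throws Exception {
                byte[] bytes = Files.readAllBytes(classDir.resolve(name + ".class"));
                ClassLoader loader = new ClassLoader(getClass().getClassLoader()) {
                    @Override
                    protected Class<?> findClass(String cn) throws ClassNotFoundException {
                        if (cn.equals(name)) {
                            return defineClass(cn, bytes, 0, bytes.length);
                        }
                        throw new ClassNotFoundException(cn);
                    }
                };
                return loader.loadClass(name).getDeclaredConstructor().newInstance();
            }
        }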

    Use a profiler to find the bottlenecks in the system. You need to determine where the performance problems (if you even have any) are happening. We can't do that for you.

  • Large number of JSP performance [repost for grandemange]

    (The body of this repost is identical to "Large number of JSP performance" above.)
    Cheers - Wei

    I don't know the upper limit, but I think 80 is too many; I have never used more than 15-20. For navigational attributes, separate tables are created, which causes a performance issue as it results in a new join at query run time. Just ask your business guy if these can be reduced. One way could be to model these attributes as separate characteristics. It will certainly help.
    Thanks...
    Shambhu

  • Oracle Error 01034 After attempting to delete a large number of rows

    I sent a command to delete a large number of rows from a table in an Oracle database (Oracle 10g / Solaris). The database files are located on the /dbo partition. Before the command, disk space utilization was at 84%; now it is at 100%.
    SQL Command I ran:
    delete from oss_cell_main where time < '30 jul 2009'
    If I try to connect to the database now I get the following error:
    ORA-01034: ORACLE not available
    df -h returns the following:
    Filesystem size used avail capacity Mounted on
    /dev/md/dsk/d6 4.9G 5.0M 4.9G 1% /db_arch
    /dev/md/dsk/d7 20G 11G 8.1G 59% /db_dump
    /dev/md/dsk/d8 42G 42G 0K 100% /dbo
    I tried to get the space back by deleting all the data in the table oss_cell_main:
    drop table oss_cell_main purge
    But no change in df output.
    I have tried solving it myself but could not find sufficiently directed information. Even pointing me to the right documentation would be highly appreciated. I have already looked at the following:
    du -h:
    8K ./lost+found
    1008M ./system/69333
    1008M ./system
    10G ./rollback/69333
    10G ./rollback
    27G ./data/69333
    27G ./data
    1K ./inx/69333
    2K ./inx
    3.8G ./tmp/69333
    3.8G ./tmp
    150M ./redo/69333
    150M ./redo
    42G .
    I think it's the rollback folder that has increased in size immensely.
    SQL> show parameter undo
    NAME TYPE VALUE
    undo_management string AUTO
    undo_retention integer 10800
    undo_tablespace string UNDOTBS1
    select * from dba_tablespaces where tablespace_name = 'UNDOTBS1'
    TABLESPACE_NAME: UNDOTBS1
    BLOCK_SIZE: 8192
    INITIAL_EXTENT: 65536
    NEXT_EXTENT: (null)
    MIN_EXTENTS: 1
    MAX_EXTENTS: 2147483645
    PCT_INCREASE: (null)
    MIN_EXTLEN: 65536
    STATUS: ONLINE
    CONTENTS: UNDO
    LOGGING: LOGGING
    FORCE_LOGGING: NO
    EXTENT_MANAGEMENT: LOCAL
    ALLOCATION_TYPE: SYSTEM
    PLUGGED_IN: NO
    SEGMENT_SPACE_MANAGEMENT: MANUAL
    DEF_TAB_COMPRESSION: DISABLED
    RETENTION: NOGUARANTEE
    BIGFILE: NO
    Note: I can reconnect to the database for short periods of time by restarting it. After some restarts it does connect, but only for a few minutes; not long enough to run exp.

    Check the alert log for errors.
    Select file_name, bytes from dba_data_files order by bytes;
    Try to shrink some datafiles to get space back.

  • When pasting a large # of values in a Choice Prompt, "Stop running this script?"

    Hi,
    In a choice prompt, I am cutting and pasting a large number of values (about 4000) into an 11g dashboard. After a few seconds I get a popup 'Stop running this script?'. The same number of values works very quickly and without the popup in 10g.
    The behavior appears to be due to the JavaScript that sets the choice values in 11g. Even if I paste these as ;-separated values, the time taken is very, very large.
    Two questions:
    (a) Why does it take so long compared to 10g? If I paste a small number of values (say 30), control comes back quickly with an active 'Apply' button.
    (b) What can be done to improve this performance?
    See this posting http://www.itwriting.com/blog/119-ie7-script-madness.html and the workaround to suppress the popup.
    I tried using a presentation variable and that avoids the popup. Why?
    I would like to know what options I have for a better user experience.
    Bhupendra

    You need to use a Column Filter Prompt for this. One of the main reasons why this is not available in a Dashboard Prompt is that in a Dashboard Page you can have reports using the dashboard prompts alongside standalone reports (that do not use dashboard prompts). Since there is no link between the dashboard prompts and the individual reports per se, you would not be able to run the reports (that depend on the dashboard prompts) after entering the data in the dashboard prompt. So the first way, as I said, is to use Column Filter Prompts instead of Dashboard Prompts. The second way is to create the dashboard prompt and the report in separate sections and minimize the report section; the first time you open the dashboard, the report will not run in the background, and you expand the section when you actually want to see the report. The third way is to use presentation variables rather than the is-prompted clause. The fourth way is to specify some default values (which I believe you do not want).
    Thanks,
    Venkat
    http://oraclebizint.wordpress.com

  • Fastest way to handle and store a large number of posts in a very short time?

    I need to handle a very large number of HTTP posts in a very short period of time. The handling will consist of nothing more than storing the posted data and returning a redirect. The data will be quite small (email, postal code). I don't know exactly how many posts, but somewhere between 50,000 and 500,000 over the course of a minute.
    My plan is to use the traffic manager to distribute the load across several data centers, and to have a website scaled to 10 instances per data center. For storage, I thought that Azure table storage would be the ideal way to handle this, but I'm not sure whether the latency would prevent my app from handling this much data.
    Has anyone done anything similar and have a suggestion for storing the data? Perhaps buffering everything into memory and then batching from there to table storage would be ideal. I'm starting to load-test the direct-to-table-storage solution and am not encouraged.
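
    A minimal sketch of the buffer-then-batch idea from the last paragraph (hypothetical names; the real Azure table storage client and its batch API are not shown): request handlers enqueue records in memory, and a background task drains the queue and writes in batches so the request path never waits on storage latency.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.Executors;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        // Hypothetical buffer-and-batch sketch for high-rate POST handling.
        public class PostBuffer {
            private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
            private final ScheduledExecutorService flusher =
                    Executors.newSingleThreadScheduledExecutor();

            public PostBuffer() {
                // Flush whatever has accumulated every 100 ms.
                flusher.scheduleWithFixedDelay(this::flush, 100, 100, TimeUnit.MILLISECONDS);
            }

            // Called from the HTTP handler: cheap, never blocks on storage.
            public void record(String email, String postalCode) {
                queue.offer(email + "," + postalCode);
            }

            private void flush() {
                List<String> batch = new ArrayList<>();
                // Drain in chunks of 100, matching Azure table storage's
                // batch-size limit (an assumption worth verifying).
                while (queue.drainTo(batch, 100) > 0) {
                    writeBatch(batch);
                    batch.clear();
                }
            }

            private void writeBatch(List<String> batch) {
                // Placeholder: swap in the real storage client's batch insert.
                System.out.println("wrote " + batch.size() + " records");
            }
        }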

    You are talking about a website with 500,000 posts per minute with redirection, so you are talking about designing a system that can handle at least 500,000 users. Assuming that not all users post within a one-minute timeframe, you are really talking about designing a system that can handle millions of users at any one time.
    The event hub architecture is completely different from the HTTP post architecture; every device/user/session writes directly to the hub. I was just wondering whether that would actually work better in your situation.
    Frank
    The site has no session or page display. It literally records a few form values posted from another site and issues a redirect back to the originating site. It is purely for data collection. I'll see if it is possible to write directly to the event hub/service bus system from a web page. If so, that might work well.

  • Analyze table after insert a large number of records?

    For performance purposes, is it good practice to execute an 'analyze table' command after inserting a large number of records into a table in Oracle 10g, if a complex query follows the insert?
    For example:
    Insert into foo ...... //Insert one million records to table foo.
    analyze table foo COMPUTE STATISTICS; //analyze table foo
    select * from foo, bar, car...... //Execute a complex query without hints
    //after 1 million records inserted into foo
    Does this strategy help to improve the overall performance?
    Thanks.

    Different execution plans will most frequently occur when the ratio of the number of records in the various tables involved in the select has changed tremendously. This happens above all when 'fact' tables grow while 'lookup' tables stay constant. This is why you shouldn't test an application with a small number of 'fact' records.
    This can happen both with analyze table and with dbms_stats.
    The advantage of dbms_stats is that it can export the current statistics to a stats table, so you can always revert to them using dbms_stats.import_table_stats.
    You can even overrule individual table and column statistics with artificial values.
    Hth
    Sybrand Bakker
    Senior Oracle DBA
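
    For reference, a hedged sketch of refreshing statistics from application code right after the bulk load (Java/JDBC here; DBMS_STATS is the documented replacement for ANALYZE, but the connection details and schema/table names are hypothetical):

        import java.sql.CallableStatement;
        import java.sql.Connection;
        import java.sql.DriverManager;

        // Sketch: gather fresh optimizer statistics after a bulk insert so
        // the complex query that follows is planned against current sizes.
        public class GatherStatsAfterLoad {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger")) {
                    try (CallableStatement cs = conn.prepareCall(
                            "{ call DBMS_STATS.GATHER_TABLE_STATS(?, ?) }")) {
                        cs.setString(1, "SCOTT"); // owner
                        cs.setString(2, "FOO");   // table
                        cs.execute();
                    }
                }
            }
        }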
