Too large frames/Outdiscards errors
Hi!
We have a problem with a switch connected to an IBM chassis (model: WS-CBS3012-IBM-I).
Several links seem to flap from time to time, and we see dropped packets (OutDiscards):
XXX#show interfaces counters errors
Port Align-Err FCS-Err Xmit-Err Rcv-Err UnderSize OutDiscards
Gi0/7 0 0 0 0 0 43983
XXX#show controllers ethernet-controller gigabitEthernet 0/7
Transmit GigabitEthernet0/7 Receive
2584788113 Bytes 971769475 Bytes
2128747524 Unicast frames 2142250737 Unicast frames
10109338 Multicast frames 7644 Multicast frames
11694673 Broadcast frames 1660 Broadcast frames
0 Too old frames 991831483 Unicast bytes
0 Deferred frames 726180 Multicast bytes
0 MTU exceeded frames 106240 Broadcast bytes
0 1 collision frames 0 Alignment errors
0 2 collision frames 0 FCS errors
0 3 collision frames 0 Oversize frames
0 4 collision frames 0 Undersize frames
0 5 collision frames 0 Collision fragments
0 6 collision frames
0 7 collision frames 0 Minimum size frames
0 8 collision frames 208582465 65 to 127 byte frames
0 9 collision frames 286273443 128 to 255 byte frames
0 10 collision frames 15408397 256 to 511 byte frames
0 11 collision frames 12313490 512 to 1023 byte frames
0 12 collision frames 10599137 1024 to 1518 byte frames
0 13 collision frames 0 Overrun frames
0 14 collision frames 0 Pause frames
0 15 collision frames
0 Excessive collisions 0 Symbol error frames
0 Late collisions 0 Invalid frames, too large
0 VLAN discard frames 1609083109 Valid frames, too large
0 Excess defer frames 0 Invalid frames, too small
320523 64 byte frames 0 Valid frames, too small
396763728 127 byte frames
6601722 255 byte frames 0 Too old frames
28139208 511 byte frames 0 Valid oversize frames
4064656 1023 byte frames 0 System FCS error frames
285633065 1518 byte frames 0 RxPortFifoFull drop frame
1429028633 Too large frames
0 Good (1 coll) frames
0 Good (>1 coll) frames
The interface is running at Full-1000 and is configured as follows:
interface GigabitEthernet0/7
description "SRV-001-PRD-VM1 - VMNic0"
switchport trunk allowed vlan 11,12,99
switchport mode trunk
link state group 1 downstream
end
For the "too large frames" counter, I have already read this post: https://supportforums.cisco.com/discussion/11034901/too-large-frames-what-does-mean-performance, so I think it's not serious.
But I don't know why some packets are being dropped, because the configuration looks correct.
Can you help me, please?
Thank you in advance !
87752686 Valid frames, too large
This is nothing. You get these counters if the port(s) are configured for 802.1Q trunking.
There is nothing to be worried about.
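To make the arithmetic behind that answer concrete, here is a small sketch (Python, purely illustrative; the constants are standard Ethernet sizes, and the claim that the controller counts tagged full-size frames this way is an inference from the output above, not documented behaviour):

```python
# Why dot1q trunks bump the "too large" counters: the frame-size math.

ETH_MAX_UNTAGGED = 1518   # max 802.3 frame: 1500 payload + 14 header + 4 FCS
DOT1Q_TAG = 4             # 802.1Q VLAN tag inserted after the source MAC

def max_frame(tagged: bool) -> int:
    """Largest legal frame on the wire, with or without a VLAN tag."""
    return ETH_MAX_UNTAGGED + (DOT1Q_TAG if tagged else 0)

print(max_frame(tagged=False))  # 1518: fits the classic limit
print(max_frame(tagged=True))   # 1522: exceeds 1518, so the port ASIC
                                # counts it as "too large" yet forwards it
```

So on a trunk, every full-size tagged frame lands in that counter, which is why the numbers grow huge without any actual problem.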
Similar Messages
-
Hi,
I have two 2960S switches configured as a FlexStack, with an EtherChannel of 4 ports connected to a Windows storage server, where I have configured 4 Broadcom NICs as a team. On the Broadcom NIC team, I have enabled an MTU of 9000 bytes.
It is working fine, with no user complaints, but looking at the 2960 interface statistics for the EtherChannel, I see <too large frames> and <valid frames, too large> on all 4 interfaces.
1 52 WS-C2960S-48TS-L 15.0(2)SE4 C2960S-UNIVERSALK9-M
2 52 WS-C2960S-48TS-L 15.0(2)SE4 C2960S-UNIVERSALK9-M
System MTU size is 1500 bytes
System Jumbo MTU size is 9198 bytes
System Alternate MTU size is 1500 bytes
Routing MTU size is 1500 bytes
Etherchannel1:
Group Port-channel Protocol Ports
------+-------------+-----------+-----------------------------------------------
1 Po1(SU) LACP Gi1/0/13(P) Gi1/0/14(P) Gi2/0/13(P)
Gi2/0/14(P)
interface GigabitEthernet1/0/13
description PortrChannel to Dell Storage server
switchport access vlan 2
switchport mode access
spanning-tree portfast
channel-group 1 mode active
interface GigabitEthernet1/0/14
description PortrChannel to Dell Storage server
switchport access vlan 2
switchport mode access
spanning-tree portfast
channel-group 1 mode active
interface GigabitEthernet2/0/13
description Portchannel to DELL Storage server
switchport access vlan 2
switchport mode access
spanning-tree portfast
channel-group 1 mode active
interface GigabitEthernet2/0/14
description Portchannel to DELL Storage server
switchport access vlan 2
switchport mode access
spanning-tree portfast
channel-group 1 mode active
sh controllers ethernet-controller gig1/0/13
Transmit: 377631134 Too large frames
Receive: 129713868 Valid frames, too large
sh controllers ethernet-controller gig1/0/14
Transmit: 747469882 Too large frames
Receive: 127341965 Valid frames, too large
sh controllers ethernet-controller gig2/0/13
Transmit: 191980042 Too large frames
Receive: 112521159 Valid frames, too large
sh controllers ethernet-controller gig2/0/14
Transmit: 83379551 Too large frames
Receive: 139446079 Valid frames, too large
Why am I seeing too large frames on those ports when I have enabled jumbo frames on the switch and server?
Regards
Robert
Thanks.
As far as I can see, only gig1/0/14 has a fairly large number of output drops: 186030.
But if that number is since the last reboot, then I guess it is okay?
2960S-Core-stack uptime is 1 year, 18 weeks, 4 days, 1 hour, 34 minutes
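To put those drop counters in proportion, a quick back-of-the-envelope calculation (hypothetical Python sketch; gig1/0/13's numbers are taken from the interface output quoted in this thread, while gig1/0/14's output-packet total is missing from the paste, so a comparable traffic volume is assumed):

```python
# Drop counters as a fraction of total traffic over the whole uptime.

drops_g13 = 1_284
packets_g13 = 2_189_137_478
print(f"{drops_g13 / packets_g13:.2e}")          # ~5.9e-07: negligible

drops_g14 = 186_030
assumed_packets_g14 = 2_200_000_000              # assumption: similar volume
print(f"{drops_g14 / assumed_packets_g14:.4%}")  # roughly 0.0085%
```

Over more than a year of uptime, even the larger figure is a tiny fraction of a percent, consistent with occasional output-queue congestion rather than a fault.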
2960S-Core-stack#sh int gig1/0/13
GigabitEthernet1/0/13 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is f41f.c2a0.9b0d (bia f41f.c2a0.9b0d)
Description: PortrChannel to Dell Storage server
MTU 9198 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:30, output 00:00:05, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 1284
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
2795613655 packets input, 3161835024205 bytes, 0 no buffer
Received 11288132 broadcasts (2752765 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 2752765 multicast, 29914 pause input
0 input packets with dribble condition detected
2189137478 packets output, 4058438953424 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
2960S-Core-stack#sh int gig1/0/14
GigabitEthernet1/0/14 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is f41f.c2a0.9b0e (bia f41f.c2a0.9b0e)
Description: PortrChannel to Dell Storage server
MTU 9198 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input 00:00:12, output 00:00:02, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 186030
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 1000 bits/sec, 1 packets/sec
3707724536 packets input, 4643627468072 bytes, 0 no buffer
Received 6787158 broadcasts (2657211 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 2657211 multicast, 128960 pause input
0 input packets with dribble condition detected
2960S-Core-stack#sh int gig2/0/13
GigabitEthernet2/0/13 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 7010.5c69.d58d (bia 7010.5c69.d58d)
Description: Portchannel to DELL Storage server
MTU 9198 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:58, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 4
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 166000 bits/sec, 18 packets/sec
3539049724 packets input, 3829512132321 bytes, 0 no buffer
Received 7243953 broadcasts (2913343 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 2913343 multicast, 3251 pause input
0 input packets with dribble condition detected
2960S-Core-stack#sh int gig2/0/14
GigabitEthernet2/0/14 is up, line protocol is up (connected)
Hardware is Gigabit Ethernet, address is 7010.5c69.d58e (bia 7010.5c69.d58e)
Description: Portchannel to DELL Storage server
MTU 9198 bytes, BW 1000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, media type is 10/100/1000BaseTX
input flow-control is off, output flow-control is unsupported
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:07, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 11046
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
3520338138 packets input, 4126381780859 bytes, 0 no buffer
Received 7053502 broadcasts (2794515 multicasts)
0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
0 watchdog, 2794515 multicast, 6051 pause input
0 input packets with dribble condition detected
2656057046 packets output, 1761174558434 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 unknown protocol drops
0 babbles, 0 late collision, 0 deferred
0 lost carrier, 0 no carrier, 0 pause output
0 output buffer failures, 0 output buffers swapped out
-
Inserted value too large for column error while scheduling a job
Hi Everyone,
I am trying to schedule a PL/SQL script as a job in my Oracle 10g installation running on Windows XP.
When I try to submit the job, I get the error "Inserted value too large for column:" followed by my entire code. The code is correct; it compiles and runs in Oracle ApEx's SQL Workshop.
My code is 4136 characters, 4348 bytes, and 107 lines long. It is code that sends an e-mail and contains a utl_smtp.write_data() call with lots of HTML.
There is no insert statement in the code whatsoever, the code only queries the database for data...
Any idea as to why I might be getting this error??
Thanks in advance
Sid
The size of the code is 4136 characters, 4348 bytes, and 107 lines, and it calls utl_smtp.write_data() with lots of HTML. A SQL variable has a maximum size of 4000.
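The usual workaround is to write the payload in pieces that each stay under the limit; in PL/SQL that would mean one utl_smtp.write_data call per chunk. The slicing logic, sketched in Python for illustration (chunk size and sample body are made up):

```python
# Chunking sketch: split a large body into pieces that each stay under
# the 4000-character limit, then send them one at a time.

def chunks(text: str, size: int = 4000):
    """Yield successive slices of at most `size` characters."""
    for start in range(0, len(text), size):
        yield text[start:start + size]

body = "x" * 10_500                      # stand-in for the HTML e-mail body
print([len(p) for p in chunks(body)])    # [4000, 4000, 2500]
```

Each piece then fits comfortably inside a single SQL-visible string value.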
-
Too large java heap error while starting the domain. Help me, please.
I am using WebLogic 10.2. After creating the domain, when I start it I get this error. Can anyone help me? Please treat this as an urgent request.
<Oct 10, 2009 4:09:24 PM> <Info> <NodeManager> <Server output log file is "/nfs/appl/XXXXX/weblogic/XXXXX_admin/servers/XXXXX_admin_server/logs/XXXXX_admin_server.out">
[ERROR] Too large java heap setting.
Try to reduce the Java heap size using -Xmx:<size> (e.g. "-Xmx128m").
You can also try to free low memory by disabling
compressed references, -XXcompressedRefs=false.
Could not create the Java virtual machine.
<Oct 10, 2009 4:09:25 PM> <Debug> <NodeManager> <Waiting for the process to die: 29643>
<Oct 10, 2009 4:09:25 PM> <Info> <NodeManager> <Server failed during startup so will not be restarted>
<Oct 10, 2009 4:09:25 PM> <Debug> <NodeManager> <runMonitor returned, setting finished=true and notifying waiters>
Thanks Kevin.
Let me try that.
More than 8 domains were already created successfully and are running fine. Only the newly created domain has this problem. I need 1 GB for my domain. Is there any way to do this?
-
Data Profiling - Value too large for column error
I am running a data profile which completes with errors. The error being reported is ORA-12899: value too large for column (actual: 41, maximum: 40).
I have checked the actual data in the table, and the maximum length is only 40 characters.
Any ideas on how to solve this? Even though the profile completes, no actual profiling is done on the data because of the error.
OWB version 11.2.0.1
Log file below.
Job Rows Selected Rows Inserted Rows Updated Rows Deleted Errors Warnings Start Time Elapsed Time
Profile_1306385940099 2011-05-26 14:59:00.0 106
Data profiling operations complete.
Redundant column analysis for objects complete in 0 s.
Redundant column analysis for objects.
Referential analysis for objects complete in 0.405 s.
Referential analysis for objects.
Referential analysis initialization complete in 8.128 s.
Referential analysis initialization.
Data rule analysis for object TABLE_NAME complete in 0 s.
Data rule analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.858 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.202 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.236 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.842 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.187 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.501 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.717 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.156 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.906 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.827 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.187 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.172 s.
Aggregation and Data Type analysis for object TABLE_NAME
Functional dependency and unique key discovery for object TABLE_NAME complete in 0 s.
Functional dependency and unique key discovery for object TABLE_NAME
Domain analysis for object TABLE_NAME complete in 0.889 s.
Domain analysis for object TABLE_NAME
Pattern analysis for object TABLE_NAME complete in 0.202 s.
Pattern analysis for object TABLE_NAME
Aggregation and Data Type analysis for object TABLE_NAME complete in 9.313 s.
Aggregation and Data Type analysis for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 9.267 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 10.187 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 8.019 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 5.507 s.
Execute data prepare map for object TABLE_NAME
Execute data prepare map for object TABLE_NAME complete in 10.857 s.
Execute data prepare map for object TABLE_NAME
Parameters
O82647310CF4D425C8AED9AAE_MAP_ProfileLoader 1 2011-05-26 14:59:00.0 11
ORA-12899: value too large for column "SCHEMA"."O90239B0C1105447EB6495C903678"."ITEM_NAME_1" (actual: 41, maximum: 40)
Parameters
O68A16A57F2054A13B8761BDC_MAP_ProfileLoader 1 2011-05-26 14:59:11.0 5
ORA-12899: value too large for column "SCHEMA"."O0D9332A164E649F3B4D05D045521"."ITEM_NAME_1" (actual: 41, maximum: 40)
Parameters
O78AD6B482FC44D8BB7AF8357_MAP_ProfileLoader 1 2011-05-26 14:59:16.0 9
ORA-12899: value too large for column "SCHEMA"."OBF77A8BA8E6847B8AAE4522F98D6"."ITEM_NAME_2" (actual: 41, maximum: 40)
Parameters
OA79DF482D74847CF8EA05807_MAP_ProfileLoader 1 2011-05-26 14:59:25.0 10
ORA-12899: value too large for column "SCHEMA"."OB0052CBCA5784DAD935F9FCF2E28"."ITEM_NAME_1" (actual: 41, maximum: 40)
Parameters
OFFE486BBDB884307B668F670_MAP_ProfileLoader 1 2011-05-26 14:59:35.0 9
ORA-12899: value too large for column "SCHEMA"."O9943284818BB413E867F8DB57A5B"."ITEM_NAME_1" (actual: 42, maximum: 40)
Parameters
Found the answer: it was the database character set, which is a multi-byte character set.
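That diagnosis is easy to reproduce: when a column's length is measured in bytes but the data contains multi-byte characters, a 40-character value can occupy 41 bytes, which matches the "actual: 41, maximum: 40" in the ORA-12899 messages above. A small illustration (the sample string is invented; Oracle's AL32UTF8 encodes these characters the same way UTF-8 does):

```python
# 40 characters, but more than 40 bytes: one accented character
# encodes to 2 bytes in UTF-8.

item_name = "CAFÉ" + "X" * 36            # exactly 40 characters
print(len(item_name))                    # 40 characters...
print(len(item_name.encode("utf-8")))    # ...but 41 bytes: É needs 2 bytes
```

Declaring the column with character-length semantics, e.g. VARCHAR2(40 CHAR), is the usual fix for this class of error.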
-
'Value too large for column' error in msql
In my 9i Lite database I have a table with a LONG column.
The documentation says that a LONG column can hold up to 2 GB of data.
However, when I try to insert more than 4097 characters into the column, I get the error 'Value too large for column' on insert.
Is this a bug, or is the documentation wrong,
or am I doing something wrong?
Any help would be much appreciated.
You have run into a bug in handling intermediate results in Oracle 9i Lite. You can bypass the bug as follows in Java:
public static oracle.lite.poljdbc.BLOB createBlob(Connection conn, byte[] data)
    throws SQLException, IOException {
    oracle.lite.poljdbc.BLOB blob = new oracle.lite.poljdbc.BLOB(
        (oracle.lite.poljdbc.OracleConnection) conn);
    OutputStream writer = blob.getBinaryOutputStream();
    writer.write(data);
    writer.flush();
    writer.close();
    return blob;
}
public static void insertRow(Connection conn, int num, oracle.lite.poljdbc.BLOB blob)
    throws SQLException {
    PreparedStatement ps = conn.prepareStatement(
        "insert into TEST_BLOB values (?, ?)");
    ps.setInt(1, num);
    ps.setBlob(2, blob);
    ps.execute();
}
-
Update trigger fails with value too large for column error on timestamp
Hello there,
I've got a problem with several update triggers. I've several triggers monitoring a set of tables.
Upon each update the updated data is compared with the current values in the table columns.
If different values are detected the update timestamp is set with the current_timestamp. That
way we have a timestamp that reflects real changes in relevant data. I attached an example for
that kind of trigger below. The triggers on each monitored table only differ in the columns that
are compared.
CREATE OR REPLACE TRIGGER T_ava01_obj_cont
BEFORE UPDATE on ava01_obj_cont
FOR EACH ROW
DECLARE
v_changed boolean := false;
BEGIN
IF NOT v_changed THEN
v_changed := (:old.cr_adv_id IS NULL AND :new.cr_adv_id IS NOT NULL) OR
(:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NULL)OR
(:old.cr_adv_id IS NOT NULL AND :new.cr_adv_id IS NOT NULL AND :old.cr_adv_id != :new.cr_adv_id);
END IF;
IF NOT v_changed THEN
v_changed := (:old.is_euzins_relevant IS NULL AND :new.is_euzins_relevant IS NOT NULL) OR
(:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NULL)OR
(:old.is_euzins_relevant IS NOT NULL AND :new.is_euzins_relevant IS NOT NULL AND :old.is_euzins_relevant != :new.is_euzins_relevant);
END IF;
[.. more values being compared ..]
IF v_changed THEN
:new.update_ts := current_timestamp;
END IF;
END T_ava01_obj_cont;
Really relevant is the statement
:new.update_ts := current_timestamp;
So far so good. The problem is, it works most of the time. Only sometimes does it fail with the following error:
SQL state [72000]; error code [12899]; ORA-12899: value too large for column "LGT_CLASS_AVALOQ"."AVA01_OBJ_CONT"."UPDATE_TS"
(actual: 28, maximum: 11)
I can't see how the value of systimestamp or current_timestamp (I tried both) could be too large for
a column defined as TIMESTAMP(6). We've got tables where more updates occur than elsewhere;
that's where most of the errors pop up. Other tables with fewer updates show errors only
sporadically, or even never. I can't see any error pattern. It's as if roughly every 10,000th update
fails.
I was desperate enough to try some language-dependent transformation like
IF v_changed THEN
l_update_date := systimestamp || '';
select value into l_timestamp_format from nls_database_parameters where parameter = 'NLS_TIMESTAMP_TZ_FORMAT';
:new.update_ts := to_timestamp_tz(l_update_date, l_timestamp_format);
END IF;to be sure the format is right. It didn't change a thing.
We are using Oracle Version 10.2.0.4.0 Production.
Did anyone encounter that kind of behaviour and solve it? I'm now pretty certain that it has to
be an oracle bug. What is the forum's opinion on that? Would you suggest to file a bug report?
Thanks in advance for your help.
Kind regards
JanCould you please edit your post and use formatting and tags. This is pretty much unreadable and the forum boogered up some of your code.
Instructions are here: http://forums.oracle.com/forums/help.jspa -
Inserted value too large for column Error
I have this table:
CREATE TABLE SMt_Session (
SessionID int NOT NULL ,
SessionUID char (36) NOT NULL ,
UserID int NOT NULL ,
IPAddress varchar2 (15) NOT NULL ,
Created timestamp NOT NULL ,
Accessed timestamp NOT NULL ,
SessionInfo nclob NULL
);
and this INSERT from a stored procedure (named SMsp_SessionCreate):
Now := (SYSDATE);
SessionUID := SYS_GUID();
/*create the session in the session table*/
INSERT INTO SMt_Session
( SessionUID ,
UserID ,
IPAddress ,
Created ,
Accessed )
VALUES ( SMsp_SessionCreate.SessionUID ,
SMsp_SessionCreate.UserID ,
SMsp_SessionCreate.IPAddress ,
SMsp_SessionCreate.Now ,
SMsp_SessionCreate.Now );
It looks like the parameter SessionUID is the one with trouble, but the length of SYS_GUID() is 32, and my column is 36.
IPAddress is passed to the SP with the value '192.168.11.11', so it should fit.
UserID is 1.
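As a side note on the lengths involved: SYS_GUID() returns RAW(16), which renders as 32 hex characters, while the familiar 36-character GUID form only appears once hyphens are added; either fits a CHAR(36) column, so the length difference alone should not overflow it. A quick check (Python, for illustration only):

```python
# GUID lengths: 32 hex characters raw, 36 with hyphens.

import uuid

guid = uuid.uuid4()
print(len(guid.hex))   # 32: hex digits only, like SYS_GUID()'s output
print(len(str(guid)))  # 36: the same 32 digits plus 4 hyphens
```

So the overflow is likely coming from one of the other columns or parameters rather than from the GUID itself.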
I am confused; which column has the problem?
CREATE OR REPLACE PROCEDURE SMsp_SessionCreate
PartitionID IN INT ,
UserID IN INT ,
IPAddress IN VARCHAR2 ,
SessionID IN OUT INT,
SessionUID IN OUT CHAR,
UserName IN OUT VARCHAR2,
UserFirst IN OUT VARCHAR2,
UserLast IN OUT VARCHAR2,
SupplierID IN OUT INT,
PartitionName IN OUT VARCHAR2,
Expiration IN INT ,
RCT1 OUT GLOBALPKG.RCT1
AS
Now DATE;
SCOPE_IDENTITY_VARIABLE INT;
BEGIN
Now := SYSDATE;
-- the new Session UID
SessionUID := SYS_GUID();
/*Cleanup any old sessions for this user*/
INSERT INTO SMt_Session_History
( UserID ,
IPAddress ,
Created ,
LastAccessed ,
LoggedOut )
SELECT
UserID,
IPAddress,
Created,
Accessed,
TO_DATE(Accessed + (1/24/60 * SMsp_SessionCreate.Expiration))
FROM SMt_Session
WHERE UserID = SMsp_SessionCreate.UserID;
--delete old
DELETE FROM SMt_Session
WHERE UserID = SMsp_SessionCreate.UserID;
/*create the session in the session table*/
INSERT INTO SMt_Session
( SessionUID ,
UserID ,
IPAddress ,
Created ,
Accessed )
VALUES ( SMsp_SessionCreate.SessionUID ,
SMsp_SessionCreate.UserID ,
SMsp_SessionCreate.IPAddress ,
SMsp_SessionCreate.Now ,
SMsp_SessionCreate.Now );
SELECT SMt_Session_SessionID_SEQ.CURRVAL INTO SMsp_SessionCreate.SessionID FROM dual;
--SELECT SMt_Session_SessionID_SEQ.CURRVAL INTO SCOPE_IDENTITY_VARIABLE FROM DUAL;
--get VALUES to return
SELECT u.AccountName INTO SMsp_SessionCreate.UserName FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
SELECT u.SupplierID INTO SMsp_SessionCreate.SupplierID FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
SELECT u.FirstName INTO SMsp_SessionCreate.UserFirst FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
SELECT u.LastName INTO SMsp_SessionCreate.UserLast FROM SMt_Users u WHERE u.UserID = SMsp_SessionCreate.UserID;
BEGIN
FOR REC IN ( SELECT
u.AccountName,
u.SupplierID,
u.FirstName,
u.LastName FROM SMt_Users u
WHERE UserID = SMsp_SessionCreate.UserID
LOOP
SMsp_SessionCreate.UserName := REC.AccountName;
SMsp_SessionCreate.SupplierID := REC.SupplierID;
SMsp_SessionCreate.UserFirst := REC.FirstName;
SMsp_SessionCreate.UserLast := REC.LastName;
END LOOP;
END;
BEGIN
FOR REC IN ( SELECT PartitionName FROM SMt_Partitions
WHERE PartitionID = SMsp_SessionCreate.PartitionID
LOOP
SMsp_SessionCreate.PartitionName := REC.PartitionName;
END LOOP;
END;
/*retrieve all user roles*/
OPEN RCT1 FOR
SELECT RoleID FROM SMt_UserRoles
WHERE UserID = SMsp_SessionCreate.UserID;
END;
this is the exact code of the sp. The table definition is this:
CREATE TABLE SMt_Session (
SessionID int NOT NULL ,
SessionUID char (36) NOT NULL ,
UserID int NOT NULL ,
IPAddress varchar2 (15) NOT NULL ,
Created timestamp NOT NULL ,
Accessed timestamp NOT NULL ,
SessionInfo nclob NULL
);
The sp gets executed with this params:
PARTITIONID := -2;
USERID := 1;
IPADDRESS := '192.168.11.11';
SESSIONID := -1;
SESSIONUID := NULL;
USERNAME := '';
USERFIRST := '';
USERLAST := '';
SUPPLIERID := -1;
PARTITIONNAME := '';
EXPIRATION := 300;
If I run the code inside the procedure in SQL*Plus (not the procedure itself), it works. When I call the SP, I get the error
inserted value too large for column
at line 48.
-
I purchased Adobe Acrobat Pro XI online; I received an e-mail that my order was being processed, and the purchase was confirmed. When I tried to download the program, I received JRun Servlet Error 413, Header Length too Large. Is there a way to resolve this? Is there a customer service number at Adobe?
Get the download from http://helpx.adobe.com/acrobat/kb/acrobat-downloads.html. You will still need your S/N to complete the install.
-
ORA-01401 and ORA-12899 : "too large for column" error messages
I posted this already in the Database - General forum.
Sorry for the double post.
For the past 8 hours, I have been googling the related ORA-01401 and ORA-12899 error messages.
I noticed that ORA-01401 had been removed from the list
of error messages in Oracle 10g documentation.
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10744/e900.htm#sthref30
And, ORA-12899 has been added.
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10744/e12700.htm
1. Is there an official Oracle document (e.g. release notes) stating such facts? If so, please provide the link.
2. Why did Oracle change the error code and not just update ORA-01401?
3. Someone posted in http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7143933880166
Even if the installed Oracle is 10g, when he tried to "insert a 100 into a number(2)",
he supposedly got ORA-01401. So, is ORA-01401 still part of 10g or not?
I have yet to confirm this myself.
Sorry for being meticulous. Our customer just wants to know the answer to the 2nd question. It's hard to convince them without showing official documents.
Thanks!
1) Well, go to Metalink and check out Note 119119.1 (B.4) and Note 287754.1.
2) Such details are seldom published. One guess is that the change altered the reason behind the message, so reusing the same code would have been a bad idea.
3) If you are referring to the occurrence of ORA-01401 from a SELECT on an XE db, that may be a bug.
-
Hi,
Following the upgrade to iOS 7.0.3 on all our iPhone and iPad devices, we have identified that when sending e-mails of around 100 KB and over, an error message appears on the device stating "Cannot Send Mail. The message was rejected by the server because it is too large." The send/receive limit is over 10 MB, so this is not the issue.
We are in an Exchange environment using Microsoft ActiveSync. This issue is not evident in iOS 6; we tested on an iPhone 3GS running iOS 6.1.3 and have been unable to reproduce on the older OS the issues seen on iOS 7. It is not possible to roll back to the older operating system, as Apple is no longer signing the software.
We use Microsoft ActiveSync to connect to our Exchange servers through a TMG. The issue is very inconsistent: some identical e-mails go through, some fail. This is not an issue with the send/receive limit, as that is over 10 MB. The error message when it fails on the TMG is "Status: 413 Request Entity Too Large", which we believe comes from IIS on the CAS server.
Does anyone have a suggested course of action?
Many thanks
This fix has to be applied on the server, not on the iOS device. My employer's mail administrator refused to fix it on the server; his position is, if other iOS devices work, why doesn't mine? So I am left with no option other than changing my iPhone. It worked fine on earlier versions of iOS and works with Androids, and a friend's iPhone 4 with iOS 7 (similar to mine) works too. So I guess something is wrong with my iPhone's settings. But the basic thing I cannot understand is that it worked on my phone before the iOS 7 upgrade, and it currently works with my Yahoo account too. A favourable reply is expected.
-
(413) Request Entity too large intermittent error
I have page in a SharePoint 2013 website which is viewed over https. The page has several input fields including two file upload controls. I am trying to upload a sample picture less than 1MB for each of the controls.
I am calling a BizTalk WCF service on submit. Sometimes when I try to submit, I get '413 Request Entity Too Large'. This error happens intermittently, though: if I try submitting the same data a number of times, it fails sometimes and works other times.
The binding settings for the service are set in code (not in Web.Config) as shown below ...
var binding = RetrieveBindingSetting();
var endpoint = RetrieveEndPointAddress("enpointAddress");
var proxy = new (binding, endpoint);
proxy.Create(request);

public BasicHttpBinding RetrieveBindingSetting()
{
    var binding = new BasicHttpBinding
    {
        MaxBufferPoolSize = 2147483647,
        MaxBufferSize = 2147483647,
        MaxReceivedMessageSize = 2147483647,
        MessageEncoding = WSMessageEncoding.Text,
        ReaderQuotas = new System.Xml.XmlDictionaryReaderQuotas
        {
            MaxDepth = 2000000,
            MaxStringContentLength = 2147483647,
            MaxArrayLength = 2147483647,
            MaxBytesPerRead = 2147483647,
            MaxNameTableCharCount = 2147483647
        },
        Security =
        {
            Mode = BasicHttpSecurityMode.Transport,
            Transport = { ClientCredentialType = HttpClientCredentialType.Certificate }
        }
    };
    return binding;
}
I have also set uploadReadAheadSize in the applicationHost.config file in IIS by running the command below, as suggested here:
appcmd.exe set config "sharepoint" -section:system.webserver/serverruntime /uploadreadaheadsize:204800 /commit:apphost
Nothing I do seems to fix this issue so I was wondering if anyone had any ideas?
Thanks
Sounds like it's not a SharePoint problem. Does the page work correctly if you don't call the BizTalk WCF service? And what happens if a console app calls the WCF service independently of SharePoint; does it fail then as well? In both cases, it would limit the troubleshooting scope to the WCF service, which gets you a step further.
Kind regards,
Margriet Bruggeman
Lois & Clark IT Services
web site: http://www.loisandclark.eu
blog: http://www.sharepointdragons.com -
Any workaround for Program too large error?
Dear All,
When a program of 5432 lines is compiled (it has 4 cursor definitions attached to huge SELECT statements from multiple tables), I get the error "PLS-00123: Program too large". The Oracle error manual suggests splitting the program. Is there any other way to make it work without splitting? (I mean, by setting some system variables?)
Thanks
Sheeba
What it means by splitting can be achieved by something along the lines of changing
PROCEDURE one IS
BEGIN
command1;
command2;
command3;
command4;
command5;
command6;
END;
to
PROCEDURE one IS
PROCEDURE onea IS
BEGIN
command1;
command2;
command3;
END;
PROCEDURE oneb IS
BEGIN
command4;
command5;
command6;
END;
BEGIN
onea;
oneb;
END;
If the problem is the size of the cursors all in the declare section, then you may be able to declare them initially as ref cursors, then having four separate sub procedures opening them so that the text of the cursors are in the subprocedures.
Alternatively, stick the selects into views. -
I bought Adobe Acrobat XI Pro and get error message 413 when I try to download
Unable to download Adobe Acrobat XI; the order is confirmed, but I get error message 413, Header Length too Large, JRun Servlet Error. How do I download the program I paid for?
Clear your browser cache and restart the download.
-
Cisco cat2960: large frames
Hi All.
I was checking my ports for errors on the subject switch and all was OK, but one IOS command,
"show controllers ethernet-controller <my interface>", has shown me that some ports have
"Too large frames" in the transmit section and "Valid frames, too large" in the receive section.
Does this mean that my switch drops these frames?
I have no stack. My Cisco Catalyst switches are connected through twisted pair (UTP Cat 5).
C2960[gi0/1]======[gi0/1]C2950[gi0/2]======[gi1/0/15]C3750
(====== is a gigabit trunk connection.)
I have checked the ports on the other switches, and the result surprised me: the C2950 has no large frames, but the C3750 does.
I have set the MTU to 9000 only on the C2960. I thought that someone connected to this switch (a PC or another network device) was sending large frames. I did not think that large frames could show up on other switches.
It seems to me that the problem occurred because of IPX traffic in my network and fallback bridging on my C3750.
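For scale, a quick sketch of how far a jumbo frame overshoots the default limit (the constants are standard Ethernet sizes; the conclusion that neighbouring default-MTU switches count such frames as oversized is an inference from the counters above, not verified behaviour):

```python
# How far a 9000-byte-MTU frame overshoots the classic 1518-byte limit.

JUMBO_MTU = 9000          # payload size configured on the C2960
L2_OVERHEAD = 18          # 14-byte header + 4-byte FCS (untagged)
DEFAULT_MAX = 1518        # largest frame a default-MTU switch expects

jumbo_frame = JUMBO_MTU + L2_OVERHEAD
print(jumbo_frame)                  # 9018 bytes on the wire
print(jumbo_frame > DEFAULT_MAX)    # True: far beyond the default limit
```

Any switch in the path still running the default system MTU would therefore classify such frames as oversized, which would explain the counters appearing on switches other than the one configured for jumbo frames.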