Index size greater than table size
Hi,
While checking the large segments, I noticed that the index HZ_PARAM_TAB_N1 is larger than its table HZ_PARAM_TAB. I think it is highly fragmented and needs defragmentation. I would like your suggestions on how I can collect more information about this. Providing more details below.
1.
select sum(bytes)/1024/1024/1024,segment_name from dba_segments group by segment_name having sum(bytes)/1024/1024/1024 > 1 order by 1 desc;
SUM(BYTES)/1024/1024/1024 SEGMENT_NAME
81.2941895 HZ_PARAM_TAB_N1
72.1064453 SYS_LOB0000066009C00004$$
52.7703857 HZ_PARAM_TAB
2. Index columns
<pre>
COLUMN_NAME COLUMN_POSITION
ITEM_KEY 1
PARAM_NAME 2
</pre>
Regards
Rahul
Hi,
Thanks. I know that a rebuild will defragment it. But as I am new to this site, I was looking for some more supporting information before drafting the mail proposing that it requires a reorg activity. It should not be possible for an index to be larger than its table, as it contains only the values of 2 columns plus the rowid, whereas the table contains 6 columns.
<pre>
Name Datatype Length Mandatory Comments
ITEM_KEY VARCHAR2 (240) Yes Unique identifier for the event raised
PARAM_NAME VARCHAR2 (2000) Yes Name of the parameter
PARAM_CHAR VARCHAR2 (4000)
Value of the parameter only if its data type is VARCHAR2.
PARAM_NUM NUMBER
Value of the parameter only if its data type is NUM.
PARAM_DATE DATE
Value of the parameter only if its data type is DATE.
PARAM_INDICATOR VARCHAR2 (3) Yes Indicates if the parameter contains existing, new or replacement values. OLD values currently exist. NEW values create initial values or replace existing values.</pre>
Regds
Rahul
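One way to collect the supporting evidence requested above is to validate the index structure and then read INDEX_STATS. A minimal sketch, assuming it can be run as the index owner during a quiet window (the plain VALIDATE STRUCTURE locks the table while it runs, and INDEX_STATS holds only the result of the most recent ANALYZE in the current session):
analyze index HZ_PARAM_TAB_N1 validate structure;

select name,
       height,
       lf_rows,
       del_lf_rows,
       round(100 * del_lf_rows / nullif(lf_rows, 0), 2) pct_deleted,
       pct_used
from   index_stats;
A PCT_USED well below 66 or a large share of deleted leaf rows would back up the reorg proposal; for ASSM tablespaces, DBMS_SPACE.SPACE_USAGE can give similar block-level detail without the locking.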
Similar Messages
-
SAP Table index size is greater than the size of the actual table
Hello Experts,
We are resolving an issue related to database performance. The present database size is 9 terabytes. The analysis of response times through ST03N shows that the DB time is 50% of the total response time. We are planning to reorganize the most frequently updated tables (found via the DB02OLD transaction).
Here we see that the size of the index for one table is greater than the actual size of the table. Is this possible, and if so, how can we reorganize the index? It does not allow us to reorganize the index using the brspace command.
Hope to hear from you soon; any additional activities you can suggest to improve the performance of the database would be appreciated.
Thank you
Hi Zaheer,
Online redefinition may help you (for a little while), but also check WHY the index became fragmented.
Improper settings can leave the index fragmented again, and you end up with recurring reorg needs.
For example:
Check whether PCT_INCREASE > 0 if you are using dictionary-managed tablespaces, or locally managed tablespaces that use a "User" allocation policy. Set it to 0 so that the online reorg generates uniform next extents.
select segment_name,
       segment_type,
       round((next_extent * blocks) / (extents * bytes)) * (bytes / blocks),
       pct_increase
from   dba_segments
where  owner = 'SAPR3'
and    segment_type in ('INDEX', 'TABLE')
and    pct_increase > 0
and    segment_name in ('Yourtable', 'Yourindex');
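Once such a segment is found, the storage setting itself can be corrected so that future extents are allocated uniformly. A minimal sketch with purely illustrative object names (this only matters for dictionary-managed tablespaces or LMTs with user allocation, and it only affects extents allocated from now on; existing extents are reshaped only by the reorg itself):
alter table SAPR3.YOURTABLE storage (pctincrease 0);
alter index SAPR3.YOURINDEX storage (pctincrease 0);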
In the following cases, it may be worthwhile to rebuild the index:
--> the percentage of the space used is bad - lower than 66%: PCT_USED
--> deleted leaf blocks represent more than 20% of total leaf blocks: DEL_LF_ROWS
--> the height of the tree is bigger than 3: HEIGHT or BLEVEL
select
name,
'----------------------------------------------------------' headsep,
'height '||to_char(height, '999,999,990') height,
'blocks '||to_char(blocks, '999,999,990') blocks,
'del_lf_rows '||to_char(del_lf_rows,'999,999,990') del_lf_rows,
'del_lf_rows_len '||to_char(del_lf_rows_len,'999,999,990') del_lf_rows_len,
'distinct_keys '||to_char(distinct_keys,'999,999,990') distinct_keys,
'most_repeated_key '||to_char(most_repeated_key,'999,999,990') most_repeated_key,
'btree_space '||to_char(btree_space,'999,999,990') btree_space,
'used_space '||to_char(used_space,'999,999,990') used_space,
'pct_used '||to_char(pct_used,'990') pct_used,
'rows_per_key '||to_char(rows_per_key,'999,999,990') rows_per_key,
'blks_gets_per_access '||to_char(blks_gets_per_access,'999,999,990') blks_gets_per_access,
'lf_rows '||to_char(lf_rows, '999,999,990')||' '||
'br_rows '||to_char(br_rows, '999,999,990') br_rows,
'lf_blks '||to_char(lf_blks, '999,999,990')||' '||
'br_blks '||to_char(br_blks, '999,999,990') br_blks,
'lf_rows_len '||to_char(lf_rows_len,'999,999,990')||' '||
'br_rows_len '||to_char(br_rows_len,'999,999,990') br_rows_len,
'lf_blk_len '||to_char(lf_blk_len, '999,999,990')||' '||
'br_blk_len '||to_char(br_blk_len, '999,999,990') br_blk_len
from
index_stats where name = 'yourindex';
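Note that INDEX_STATS is populated only by ANALYZE ... VALIDATE STRUCTURE, and only for the session that ran it. A hedged sketch that runs the validation and then applies the three rules of thumb above (the index name is illustrative, and the ANALYZE locks the table while it runs):
analyze index SAPR3.YOURINDEX validate structure;

select name,
       height,
       pct_used,
       lf_rows,
       del_lf_rows,
       case
         when pct_used    < 66
           or height      > 3
           or del_lf_rows > 0.2 * lf_rows
         then 'candidate for rebuild'
         else 'probably fine'
       end as assessment
from   index_stats;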
bye
yk -
Index size greater than Table Size
Hi all,
We are running BI7.0 in our environment.
One of the tables has an index size much greater than the table itself. The details are listed below:
Table Name: RSBERRORLOG
Total Table Size: 141,795,392 KB
Total Index Size: 299,300,576 KB
Index:
F5: Index Size / Allocated Size: 50%
Is there any reason why the index should grow larger than the table? If so, would reorganizing the index help, and can this be controlled?
Please let me know, as I am not very familiar with the database side.
Thanks and Regards,
Raghavan
Hi Hari,
It's basically a degenerated index. You can follow the steps below:
1. Delete some entries from RSBERRORLOG.
BI database growing at 1 Gb per day while no data update on ECC
2. Reorganize this table with BRSPACE. The size of the table would then be much smaller. I do not remember whether this table has a LONG RAW field (in that case an export/import of the table would be required). --- Basis job
3. Delete and recreate the index on this table (see the sketch below).
You will gain a lot of space.
I assumed you are on Oracle.
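If dropping and recreating the index (step 3) is awkward, an online rebuild achieves much the same on Oracle. A minimal sketch; the owner and index name below are illustrative, since the post does not name the actual index on RSBERRORLOG, and the ONLINE option needs Enterprise Edition:
select owner, index_name
from   dba_indexes
where  table_name = 'RSBERRORLOG';

alter index SAPSR3."RSBERRORLOG~0" rebuild online;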
More information on reorganization: [Reorg|TABLE SPACE REORGANIZATION !! QUICK EXPERT INPUTS]
Regards
Anindya -
Index size larger than table size
Hi All,
Let me know the possible reasons why an index size can be greater than the table size and, in some cases, smaller than the table size. ASAP
Thanks in advance
sherief
hi,
The size of an index depends on how inserts and deletes occur.
With sequential indexes, when records are deleted randomly the space will not be reused, as all inserts go into the leading leaf block.
When all the records in a leaf block have been deleted, the leaf block is freed (put on the index freelist) for reuse, reducing the overall percentage of free space.
This means that if you are deleting aged sequence records at the same rate as you are inserting, the number of leaf blocks will stay approximately constant, with a constant low percentage of free space. In this case it is probably hardly ever worth rebuilding the index.
With records being deleted randomly, the inefficiency of the index depends on how the index is used.
If numerous full index (or range) scans are being done, then it should be rebuilt to reduce the number of leaf blocks read. This should be done before it significantly affects the performance of the system.
If index accesses are being done, then it only needs to be rebuilt to stop the branch depth increasing or to recover the unused space.
Here is an example of how an index can become larger than its table:
Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
Connected as admin
SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
Table created
SQL> create index rich_i on rich(c1);
Index created
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 1179648 144 9
INDEX 1179648 144 9
SQL> delete from rich where mod(c1,2)=0;
29475 rows deleted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 1179648 144 9
INDEX 1179648 144 9
SQL> insert into rich select rownum+100000, 'qq' from all_objects;
58952 rows inserted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 1703936 208 13
INDEX 2097152 256 16
SQL> insert into rich select rownum+200000, 'aa' from all_objects;
58952 rows inserted
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 2752512 336 21
INDEX 3014656 368 23
SQL> delete from rich where mod(c1,2)=0;
58952 rows deleted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 2752512 336 21
INDEX 3014656 368 23
SQL> insert into rich select rownum+300000, 'hh' from all_objects;
58952 rows inserted
SQL> commit;
Commit complete
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 3014656 368 23
INDEX 4063232 496 31
SQL> alter index rich_i rebuild;
Index altered
SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
SEGMENT_TYPE BYTES BLOCKS EXTENTS
TABLE 3014656 368 23
INDEX 2752512 336 21
SQL> -
Why Index size is bigger than table size?
Dear All,
I found that in my database the total size of the tables is around 30 TB (all tables in the database), and the total index size for the same is 60 TB. This is a data warehousing environment.
How can the index size and table size differ like this?
Why do they differ? Why is the index size bigger than the table size?
How do I manage the size?
Please give me a clear explanation and the required information on the above.
Regards
Suresh
There are many reasons why the total space allocated to indexes could be larger than the total space allocated to tables. Sometimes it's a mark of good design, sometimes it indicates a problem. In your position your first move is to spend as little time as possible in deciding whether your high-level summary is indicative of a problem, so you need to look at a little more detail.
As someone else pointed out: are you looking at the sizes because you are running out of space, or because you have a perceived performance problem? If neither, then your question is one of curiosity.
If it's about performance then you should be looking for code (either through statspack/AWR or sql_trace) that is performing badly and use the analysis of that code to help you identify suspect indexes.
If it's about space, then you need to do some simple investigations aimed at finding a few indexes that can be "shrunk" or dropped. Pointers for this are:
select
table_owner, table_name, count(*)
from
dba_indexes
group by
table_owner, table_name
having
count(*) > 2 -- adjust to keep the output short
order by
count(*) desc;
This tells you which tables have the most indexes - check the sizes of the tables and indexes and then check the index definitions for the larger tables with lots of indexes.
Second quick check - join dba_tables to dba_indexes by table_name, and report the table blocks and index leaf blocks in descending order of leaf block count. Look for indexes which are very big, and also bigger than their underlying tables. There are special cases (and bugs) that can cause indexes to be much bigger than they need to be ... this report may identify a couple of anomalies that could benefit from an emergency fix followed (possibly) by a strategic fix.
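A minimal sketch of that second check (joining on owner as well as table_name to be safe; BLOCKS and LEAF_BLOCKS come from the optimizer statistics, so the figures assume the stats are reasonably current):
select t.owner,
       t.table_name,
       t.blocks       table_blocks,
       i.index_name,
       i.leaf_blocks
from   dba_tables  t,
       dba_indexes i
where  i.table_owner = t.owner
and    i.table_name  = t.table_name
and    i.leaf_blocks > t.blocks        -- indexes bigger than their tables
order by i.leaf_blocks desc;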
Regards
Jonathan Lewis -
Index size (num_rows) is bigger than the table's rows
Hi everyone,
I'm encountering some strange problems with the CBO in Oracle 10.2.0.3 - it's telling me that I have more rows in the indexes than there are rows in the tables.
I've tried all combinations of dbms_stats and analyze and cannot understand how the CBO comes up with such numbers. I've even done a "delete statistics" and
re-analysed the table and indexes, but it doesn't help.
The command I used is variations of the following:
exec
DBMS_STATS.GATHER_TABLE_STATS(ownname=>'MBS',tabname=>'READINGTOU', -
estimate_percent=>dbms_stats.auto_sample_size,method_opt=>'FOR COLUMNS PROCESSSTATUS',degree=>2);
EVEN TRIED
exec sys.dbms_utility.analyze_schema('MBS','ESTIMATE', estimate_percent => 15);
I've even used estimate_percent of 50 and still getting lower numbers for the table.
Initially I was afraid that since the index is larger than the table, the index would never be used. So the question is, does it really matter that the index's num_rows is bigger than the table's num_rows? What is the consequence of this? And how do I get the optimizer to correct the differences in the stats? The table is 30G in size and growing, so a COMPUTE is out of the question.
But I have the same problem in dev, and I did the COMPUTE in dev and get the same thing: more rows in the indexes than there are rows in the tables.
Edited by: user630084 on Mar 11, 2009 10:45 AM
Is your issue that you are having problems with the execution plans of queries referencing these objects? Or is your problem that you are observing more num_rows in the index than in the table when you query the data dictionary?
If it's the latter then there's really no concern (unless the estimates are insanely inconsistent). The statistics are estimates and as such, will not be 100% accurate, though they should do a reasonable job of representing the data in your system (when they don't, then you have an issue, but we've seen nothing to indicate that as of yet). -
Index size bigger than table size? Why?
I have a table student_enrollment_item_tbl with primary key "pk_stu_enroll_item" - STU_ENROLL_ID, TASK_ID, PART_ID, ITEM_ID.
Table structure is as following:
Name Null? Type
STU_ENROLL_ID NOT NULL NUMBER
ITEM_ID NOT NULL VARCHAR2(15)
PART_ID NOT NULL NUMBER(2)
TASK_ID NOT NULL VARCHAR2(10)
QUESTION_NO NOT NULL VARCHAR2(25)
FLASH_NO NOT NULL NUMBER(3)
ITEM_NO NUMBER(3)
The table is 1856 MB in size, while the index is 2730 MB in size. I am surprised, since 'size of index > size of table'. Why does this happen?
1) As seen from the result of the following SQL, the PCT_FREE is 10. It's not bad.
select index_name, table_name, ini_trans, max_trans, initial_extent, min_extents, max_extents,
freelists, freelist_groups, pct_free, leaf_blocks from all_indexes
where table_name = 'STUDENT_ENROLLMENT_ITEM_TBL';
INDEX_NAME TABLE_NAME INI_TRANS MAX_TRANS INITIAL_EXTENT MIN_EXTENTS MAX_EXTENTS FREELISTS FREELIST_GROUPS PCT_FREE LEAF_BLOCKS
pk_stu_enroll_item STUDENT_ENROLLMENT_ITEM_TBL 2 255 379125760 1 2147483645 1 1 10 323428
2) The pattern is like this:
I regard it as not sequential, but with a lot of distinct values.
STU_ENROLL_ID ITEM_ID PART_ID TASK_ID QUESTION_NO FLASH_NO ITEM_NO
10005085 C31001008 1 C310010 8 9 8
10005085 C31001009 1 C310010 9 10 9
10005085 C31001010 1 C310010 10 11 10
10005086 0 0 C310010 0 0 0
10005086 0 1 C310010 0 1 0
10005086 C31001001 1 C310010 1 2 1
10005086 C31001002 1 C310010 2 3 2
10005086 C31001003 1 C310010 3 4 3
10005086 C31001004 1 C310010 4 5 4
10005086 C31001005 1 C310010 5 6 5
10005086 C31001006 1 C310010 6 7 6
10005086 C31001007 1 C310010 7 8 7
10005086 C31001008 1 C310010 8 9 8
10005086 C31001009 1 C310010 9 10 9
10005086 C31001010 1 C310010 10 11 10
10005055 C31001005 1 C310010 5 6 5
10005055 C31001006 1 C310010 6 7 6
10005055 C31001007 1 C310010 7 8 7
10005055 C31001008 1 C310010 8 9 8
3) Not many deletes have been run on the table, as far as I know.
I still cannot figure out the reason. Please help. Thanks. -
Getting same index size despite different table size
Hello,
this question arose from a different thread, but touches a different problem, which is why I have decided to post it as a separate thread.
I have several tables of 3D points.
The points roughly describe the same area but in different densities, which means the tables are of different sizes. The smallest contains around 3million entries and the largest around 37 million entries.
I applied an index with
CREATE INDEX <index name>
ON <table name>(<column name>)
INDEXTYPE is MDSYS.SPATIAL_INDEX
PARAMETERS('sdo_indx_dims=3');
My problem is that I am trying to see how much space the index occupies for each table.
I used the following syntax to get the answer to this:
SELECT usim.sdo_index_name segment_name, bytes/1024/1024 segment_size_mb
FROM user_segments us, user_sdo_index_metadata usim
WHERE usim.SDO_INDEX_NAME = <spatial index name>
AND us.segment_name = usim.SDO_INDEX_TABLE;
(thanks Reggie for supplying the sql)
Now, the curious thing is that in all cases, I get the answer
SEGMENT_NAME SEGMENT_SIZE_MB
LIDAR_POINTS109_IDX .0625
(obviously with a different sement name in each case).
I tried to see what an estimated index size would be with
SDO_TUNE.ESTIMATE_RTREE_INDEX_SIZE
And I get estimates ranging from 230MB in the case of 3million records up to 2.9 for the case of 37million records.
Does anyone have an idea why I am not getting a different actual index size for the different tables?
Any help is greatly appreciated!!!
Cheers,
F.
It looks like your indexes didn't actually create properly. Spatial indexes are a bit different to 'normal' indexes in this regard. A BTree index will either create or not. However, when creating a spatial index, something may fail, but the index structure will remain and it will appear to be valid according to the data dictionary.
Consider the following example in which the SRID has a problem:
SQL> CREATE TABLE INDEX_TEST (
2 ID NUMBER PRIMARY KEY,
3 GEOMETRY SDO_GEOMETRY);
Table created.
SQL>
SQL> INSERT INTO INDEX_TEST (ID, GEOMETRY) VALUES (1,
2 SDO_GEOMETRY(2001, 99999, SDO_POINT_TYPE(569278.141, 836920.735, NULL), NULL, NULL)
3 );
SQL> INSERT INTO user_sdo_geom_metadata VALUES ('INDEX_TEST','GEOMETRY',
2 MDSYS.SDO_DIM_ARRAY(
3 MDSYS.SDO_DIM_ELEMENT('X',0, 1000, 0.0005),
4 MDSYS.SDO_DIM_ELEMENT('Y',0, 1000, 0.0005)
5 ), 88888);
1 row created.
SQL>
SQL> CREATE INDEX INDEX_TEST_SPIND ON INDEX_TEST(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
CREATE INDEX INDEX_TEST_SPIND ON INDEX_TEST(GEOMETRY) INDEXTYPE IS MDSYS.SPATIAL_INDEX
ERROR at line 1:
ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
ORA-13249: SRID 88888 does not exist in MDSYS.CS_SRS table
ORA-29400: data cartridge error
Error - OCI_NODATA
ORA-06512: at "MDSYS.SDO_INDEX_METHOD_10I", line 10
SQL> SELECT usim.sdo_index_name segment_name, bytes/1024/1024 segment_size_mb,
2 usim.sdo_index_status
3 FROM user_segments us, user_sdo_index_metadata usim
4 WHERE usim.SDO_INDEX_NAME = 'INDEX_TEST_SPIND'
5 AND us.segment_name = usim.SDO_INDEX_TABLE;
SEGMENT_NAME SEGMENT_SIZE_MB SDO_INDEX_STATUS
INDEX_TEST_SPIND .0625 VALID
1 row selected.
SQL>
When you ran the CREATE INDEX statement did it say "Index created." afterwards or did you get an error?
Did you run the CREATE INDEX statement in SQL*Plus yourself or was it run by some software?
I suggest you drop the indexes and try creating them again. Watch out for any errors. Chances are it's an SRID issue. -
Passing variable of size greater than 32767 from Pro*C to PL/SQL procedure
Hi,
I am trying to pass a variable of size greater than 32767 from Pro*C to a PL/SQL procedure. I tried assigning the host variable directly to a CLOB in the SQL section, but nothing happens. In the code below the size of l_var1 is 33000. PROC_DATA is a procedure that takes a CLOB as input and gives the other three (Data, Err_Code, Err_Msg) as output. These variables are declared globally.
Process_Data(char* l_var1)
EXEC SQL EXECUTE
DECLARE
l_clob clob;
BEGIN
l_clob := :l_var1;
PROC_DATA(l_clob,:Data,:Err_Code,:Err_Msg) ;
COMMIT;
END;
END-EXEC;
I also tried using DBMS_LOB. This was the code that I used:
Process_Data(char* l_var1)
EXEC SQL EXECUTE
DECLARE
l_clob clob;
BEGIN
DBMS_LOB.CREATETEMPORARY(l_clob,TRUE);
DBMS_LOB.OPEN(l_clob,dbms_lob.lob_readwrite);
DBMS_LOB.WRITE (l_clob, LENGTH (:l_var1), 1,:l_var1);
PROC_DATA(l_clob,:Data,:Err_Code,:Err_Msg) ;
COMMIT;
END;
END-EXEC;
Here, since the DBMS_LOB package allows a maximum of 32767 bytes per write, the value of l_var1 is not being assigned to l_clob.
I am able to do the above provided I split l_var1 into two variables and then append to l_clob using WRITEAPPEND, i.e. l_var1 is 32000 in length and l_var2 contains the rest.
Process_Data(char* l_var1,char* l_var2)
EXEC SQL EXECUTE
DECLARE
l_clob clob;
BEGIN
dbms_lob.createtemporary(l_clob,TRUE);
dbms_lob.OPEN(l_clob,dbms_lob.lob_readwrite);
DBMS_LOB.WRITE (l_clob, LENGTH (:l_var1), 1,:l_var1);
DBMS_LOB.WRITEAPPEND (l_clob, LENGTH(:l_var2), :l_var2);
PROC_DATA(l_clob,:Data,:Err_Code,:Err_Msg) ;
COMMIT;
END;
END-EXEC;
But the above code requires dynamic memory allocation in Pro*C, which I would like to avoid. Could you let me know if there is any other way to do this?
Hi,
The LONG datatype has been deprecated; use CLOB or BLOB. This will solve a lot of problems inherent in that datatype.
Regards,
Ganesh R -
Report to find all table and index sizes
Hi all,
Good day..
Is there any report.sql or similar to find out the sizes of all the tables and indexes in a database?
thanks,
baskar.l
1. To get table size
What will be the table size if?
<or>
break on report
set line 200
COMPUTE SUM LABEL "Total Reclaimable Space" OF "KB Free Space" ON REPORT
column "Table Size" Format a20
column "Actual Data Size" Format a20
column "KB Free Space" Format "9,99,999.99"
select table_name,
round((blocks*8),2)||'kb' "Table size",
round((num_rows*avg_row_len/1024),2)||'kb' "Actual Data size",
pct_free,
round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) "KB Free Space"
from user_tables
where round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) > 0
order by round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) desc
2. To get index size
How to size the Index
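Alongside the linked note, a segment-based sketch that lists each table together with the total allocated size of its indexes (assumes access to the DBA views; partitioned and LOB segments are left out for brevity, and the figures are allocated space rather than used space):
select s.owner,
       nvl(i.table_name, s.segment_name) table_name,
       s.segment_type,
       round(sum(s.bytes)/1024/1024, 2)  size_mb
from   dba_segments s,
       dba_indexes  i
where  s.segment_type in ('TABLE', 'INDEX')
and    i.owner (+)      = s.owner
and    i.index_name (+) = s.segment_name
group by s.owner, nvl(i.table_name, s.segment_name), s.segment_type
order by size_mb desc;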
Hth
Girish Sharma -
How can I fetch a template greater than 32000 K in size into the rich text editor
How can I fetch a template greater than 32000 K in size into the rich text editor?
Would this help you?
- Dynamic Action Plugin - Enkitec CLOB Load -
Parse an XML of size greater than 64k using DOM
Hi,
I had a question regarding the limitation on parsing a file of size greater than 64K in Oracle 10g. Is the error "ORA-31167: XML nodes over 64K in size cannot be inserted" related to this?
One of the developers was saying that if we load an XML document of size greater than 64K into Oracle DOM, it will fail. Is 64K the size of the file or the size of a text node in the XML?
Is there a way we can overcome this limitation?
I believe the Oracle 11g R1 documentation states that the existing 64K limitation on the size of a text node has been eliminated. So if we use Oracle 11g, does that mean we can load XML files of size greater than 64K (or XML having text nodes of size greater than 64K)?
I am not well versed with XML. Please help me out.
Thanks for your help.
Search this forum for the ORA-error.
Among others it will show the following: Node size
In this case I think we can assured that "a future release" in 2006 was 11.1 as mentioned by Mark (= Sr Product Manager Oracle XML DB) -
PUT Blobs of size greater than 5.5MB fail with HTTPS but not HTTP
I have written a Cygwin app that uploads (using the REST API PUT operation) Block Blobs to my Azure storage account, and it works well for different size blobs when using HTTP. However, use of SSL (i.e. PUT using HTTPS) fails for Blobs greater than 5.5MB.
Blobs less than 5.5MB upload correctly. Anything greater and I find that the TCP session (as seen by Wireshark) reports a dwindling window size that goes to 0 once the aforementioned number of bytes have been transferred. The failure is very repeatable and
consistent. As a point of reference, PUT operations against my Google/AWS/HP accounts work fine when using HTTPS for various object sizes, which suggests my problem is not in my client but specific to the HTTPS implementation on the MSAZURE storage servers.
If I upload the 5.5MB blob as two separate uploads of 4MB and 1.5MB followed by a PUT Block List, the operation succeeds as long as the two uploads used
separate HTTPS sessions. Notice the emphasis on separate. That same operation fails if I attempt to maintain an HTTPS session across both uploads. This is another data point that seems to suggest that the Storage
server has a problem
Any ideas on why I might be seeing this odd behavior that appears very specific to MS Azure HTTPS, but is not seen when used against AWS/Google/HP cloud storage servers?
Hi,
I'm getting this problem also when trying to upload blobs > 5.5mb using the Azure PHP SDK with HTTPS.
There is no way I can find to get a blob > 5.5mb to upload, unless you use http, rather than https, which is not a good solution.
I've written my own scripts to use the HTTP_Request2 library, to send the request as a test, and it fails with that also when using the 'socket' method.
However, if I write a script using the PHP Curl extension directly, then it works fine, and blobs > 5.5mb get uploaded.
It seems to be irrelevant which method is used, uploading in 1 go, or using smaller chunks, the PHP SDK seems broken.
Also, I think I've found another bug in the SDK: when you do the smaller chunks, the assignment of the BlockID is not correct.
In: WindowsAzure/Blob/BlobRestProxy.php
Line: $block->setBlockId(base64_encode(str_pad($counter++, '0', 6)));
That is incorrect usage of the str_pad function, and if you upload a huge blob that needs splitting, then the blockIDs will after a while become a different length and therefore fail.
It should be: str_pad($counter++, 6, '0',STR_PAD_LEFT);
I also think there is one base64_encode() too many in there, as I think it's being done twice: once in that line, and then again within createBlobBlock() just before the send().
Can someone please advise when this/these bug(s) will be fixed in the PHP SDK, as at the moment it's useless to me since I can't upload things securely. -
java.exe size greater than 350M, web report often errors
Hi friends,
My IE is 8, and WebI is 4.0.
The web report file (universe) has 63 reports and hundreds of formulas.
When I open the report, java.exe grows to more than 350 MB.
Every time I edit the report and change only a few formulas... the edit does not work.
When I edit Data Access or refresh, I then get the error: "An error has occurred..." (screenshot).
I can only log off and shut down IE...
After a while I open IE and sign in to the web report... again...
I set up part of the RAM as a virtual hard disk and pointed the IE browser cache to the new disk,
but the error still exists.
Please help me, thanks.
Hi,
On Windows 7, you may set the maximum Java heap size to 1536 MB in Java Control Panel -> Java -> Java Runtime Environment Settings, Runtime parameters, for both User and System:
-Xmx1536m -Xincgc
Note that depending on the desktop OS, the maximum Java heap size can vary; you'd need to test it and find the ceiling for that OS.
-Xincgc enables incremental garbage collection instead of waiting for a whole chunk of garbage to be collected.
Hope this helps,
Jin-Chong -
Merging files greater than 100MB in size
How do I merge multiple pdf files greater than 100MB in size?
However...the talk of 100MB suggests you don't actually have Acrobat. This is indeed a fixed limit if you are a subscriber to PDF Pack (CreatePDF), and the way around it is to get Acrobat.