On BFILE usage
Hi,
Any help, documents, tutorials, or samples on BFILE usage would be appreciated.
Ravishankar Swamy
G'day All,
Just an update: I've successfully modified the APEX 3.0 demo application to use BFILE instead of BLOB for DEMO_IMAGES. I now have a good set of procedures and functions to work with BFILEs, including an equivalent CUSTOM_IMAGE_DISPLAY for showing BFILE images on web pages etc. A few lessons learnt, but one of the big ones, which caused a small flap after all my hard work, was when I ran a test and uploaded 10 images of 6 MB each: the APEX SYSAUX tablespace still increased by 65 MB even though I had deleted the BLOB from the workflow table after storing the file as a BFILE.
After doing some research I located this website (http://halisway.blogspot.com/2007/06/reclaiming-lob-space-in-oracle.html), which explains that even though we delete records with BLOBs, the LOB space is not automatically reclaimed. So as the SYS user I ran the following commands, then rechecked the SYSAUX tablespace, and all was good.
NB: The bloody table that is used to store the uploaded files is located under FLOWS_FILES, so make sure you're attached to that schema, else ALTER TABLE complains that it cannot see the table/view.
alter session set current_schema=FLOWS_FILES;
alter table wwv_flow_file_objects$ modify lob (blob_content) (shrink space);
So now Oracle XE shouldn't run out of tablespace too quickly, as long as I run the above maintenance commands to reclaim space.
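For anyone wanting to verify that the shrink actually reclaimed space, a query along these lines (a sketch, run as a privileged user against the standard DBA_SEGMENTS view) shows the LOB segment sizes in SYSAUX; rerun it after the shrink and compare:

```sql
-- Report the largest LOB segments in SYSAUX; rerun after the shrink to compare.
SELECT owner, segment_name, ROUND(bytes/1024/1024) AS size_mb
FROM   dba_segments
WHERE  tablespace_name = 'SYSAUX'
AND    segment_type    = 'LOBSEGMENT'
ORDER  BY bytes DESC;
```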
One happy chappy.
Mark
Similar Messages
-
Hi,
I have an Oracle 9.2 DB with 2 CPUs. When I took a Statspack report, I found that CPU time is very high, i.e. more than 81%.
One of the queries is taking about 51% of the CPU usage:
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */DISTINCT CF_COURSE_CODE FROM OM_COURSE_FEE,OT_STUDENT_FEE_COL_HEAD,OT_STUDENT_FEE_COL_DETL WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID AND
SFCD_FEE_TYPE_CODE = CF_TYPE_CODE AND CF_COURSE_CODE IN ( 'PE1','PE2','CCT','CPT' ) AND SFCH_TXN_CODE = :b1 AND SFCD_SUBSCRBE_JOURNAL_YN IN ( 'T','R','1','C' ) AND SFCH_APPR_UID IS NOT NULL
AND SFCH_APPR_DT IS NOT NULL AND SFCH_NO = :b2 AND NOT EXISTS (SELECT 'X' FROM OM_STUDENT_HEAD WHERE STUD_SRN = SFCH_STUD_SRN_TEMP_NO AND NVL(STUD_CLO_STATUS,0) != 1 AND
NVL(STUD_REG_STATUS,0) != 23 AND STUD_COURSE_CODE != 'CCT' AND CF_COURSE_CODE= STUD_COURSE_CODE ) ORDER BY 1 DESC
Explain plan:
SQL> select * from table(dbms_xplan.display());
PLAN_TABLE_OUTPUT
| Id | Operation                          | Name                         | Rows  | Bytes | Cost | Pstart| Pstop |
|  0 | SELECT STATEMENT                   |                              |     1 |    69 |   45 |       |       |
|  1 | SORT UNIQUE                        |                              |     1 |    69 |   27 |       |       |
|  2 | TABLE ACCESS BY GLOBAL INDEX ROWID | OT_STUDENT_FEE_COL_DETL      |     1 |    12 |    2 | ROWID | ROW L |
|  3 | NESTED LOOPS                       |                              |     1 |    69 |    9 |       |       |
|  4 | NESTED LOOPS                       |                              |     1 |    57 |    7 |       |       |
|  5 | TABLE ACCESS BY GLOBAL INDEX ROWID | OT_STUDENT_FEE_COL_HEAD      |     1 |    48 |    5 | ROWID | ROW L |
|  6 | INDEX SKIP SCAN                    | OT_STUDENT_FEE_COL_HEAD_UK01 |     1 |       |    4 |       |       |
|  7 | INLIST ITERATOR                    |                              |       |       |      |       |       |
|  8 | TABLE ACCESS BY INDEX ROWID        | OM_COURSE_FEE                |     1 |     9 |    2 |       |       |
|  9 | INDEX RANGE SCAN                   | OCF_CFC_CODE                 |     1 |       |    1 |       |       |
| 10 | FILTER                             |                              |       |       |      |       |       |
| 11 | TABLE ACCESS BY GLOBAL INDEX ROWID | OM_STUDENT_HEAD              |     1 |    21 |    4 | ROWID | ROW L |
| 12 | INDEX RANGE SCAN                   | IDM_STUD_SRN_COURSE          |     1 |       |    3 |       |       |
| 13 | INDEX RANGE SCAN                   | IDM_SFCD_FEE_TYPE_CODE       | 34600 |       |    1 |       |       |
Note: cpu costing is off, 'PLAN_TABLE' is old version
21 rows selected.
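The "'PLAN_TABLE' is old version" note means the plan table was created by an earlier release's script, which can hide plan columns. A sketch of the usual fix, assuming SQL*Plus (where "?" expands to ORACLE_HOME):

```sql
-- Drop the stale plan table and recreate it from the current release's script.
DROP TABLE plan_table;
@?/rdbms/admin/utlxplan.sql
```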
SQL>
Statspack report
DB Name DB Id Instance Inst Num Release Cluster Host
ai 1372079993 ai11 1 9.2.0.6.0 YES ai1
Snap Id Snap Time Sessions Curs/Sess Comment
Begin Snap: 175 12-Dec-08 13:21:33 ####### .0
End Snap: 176 12-Dec-08 13:56:09 ####### .0
Elapsed: 34.60 (mins)
Cache Sizes (end)
~~~~~~~~~~~~~~~~~
Buffer Cache: 3,264M Std Block Size: 8K
Shared Pool Size: 608M Log Buffer: 977K
Load Profile
~~~~~~~~~~~~ Per Second Per Transaction
Redo size: 5,727.62 21,658.54
Logical reads: 16,484.89 62,336.32
Block changes: 32.49 122.88
Physical reads: 200.46 758.03
Physical writes: 5.08 19.23
User calls: 97.43 368.44
Parses: 11.66 44.11
Hard parses: 0.39 1.48
Sorts: 3.22 12.19
Logons: 0.02 0.06
Executes: 36.70 138.77
Transactions: 0.26
% Blocks changed per Read: 0.20 Recursive Call %: 28.65
Rollback per transaction %: 20.95 Rows per Sort: 131.16
Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.79 In-memory Sort %: 99.99
Library Hit %: 98.92 Soft Parse %: 96.65
Execute to Parse %: 68.21 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 60.50 % Non-Parse CPU: 99.48
Shared Pool Statistics Begin End
Memory Usage %: 90.06 89.79
% SQL with executions>1: 72.46 72.46
% Memory for SQL w/exec>1: 69.42 69.51
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~ % Total
Event Waits Time (s) Ela Time
CPU time 3,337 81.43
db file sequential read 60,550 300 7.32
global cache cr request 130,852 177 4.33
db file scattered read 72,915 101 2.46
db file parallel read 3,384 75 1.84
Cluster Statistics for DB: ai Instance: ai11 Snaps: 175 -176
Global Cache Service - Workload Characteristics
Ave global cache get time (ms): 1.3
Ave global cache convert time (ms): 2.1
Ave build time for CR block (ms): 0.1
Ave flush time for CR block (ms): 0.3
Ave send time for CR block (ms): 0.3
Ave time to process CR block request (ms): 0.7
Ave receive time for CR block (ms): 4.9
Ave pin time for current block (ms): 0.0
Ave flush time for current block (ms): 0.0
Ave send time for current block (ms): 0.3
Ave time to process current block request (ms): 0.3
Ave receive time for current block (ms): 2.8
Global cache hit ratio: 1.5
Ratio of current block defers: 0.0
% of messages sent for buffer gets: 1.4
% of remote buffer gets: 0.1
Ratio of I/O for coherence: 1.1
Ratio of local vs remote work: 9.7
Ratio of fusion vs physical writes: 0.1
Global Enqueue Service Statistics
Ave global lock get time (ms): 0.8
Ave global lock convert time (ms): 0.0
Ratio of global lock gets vs global lock releases: 1.1
GCS and GES Messaging statistics
Ave message sent queue time (ms): 0.4
Ave message sent queue time on ksxp (ms): 2.7
Ave message received queue time (ms): 0.2
Ave GCS message process time (ms): 0.1
Ave GES message process time (ms): 0.1
% of direct sent messages: 19.4
% of indirect sent messages: 43.5
% of flow controlled messages: 37.1
GES Statistics for DB: ai Instance: ai11 Snaps: 175 -176
Statistic Total per Second per Trans
dynamically allocated gcs resourc 0 0.0 0.0
dynamically allocated gcs shadows 0 0.0 0.0
flow control messages received 0 0.0 0.0
flow control messages sent 0 0.0 0.0
gcs ast xid 0 0.0 0.0
gcs blocked converts 1,231 0.6 2.2
gcs blocked cr converts 2,432 1.2 4.4
gcs compatible basts 0 0.0 0.0
gcs compatible cr basts (global) 658 0.3 1.2
gcs compatible cr basts (local) 57,822 27.9 105.3
gcs cr basts to PIs 0 0.0 0.0
gcs cr serve without current lock 0 0.0 0.0
gcs error msgs 0 0.0 0.0
gcs flush pi msgs 821 0.4 1.5
gcs forward cr to pinged instance 0 0.0 0.0
gcs immediate (compatible) conver 448 0.2 0.8
gcs immediate (null) converts 1,114 0.5 2.0
gcs immediate cr (compatible) con 42,094 20.3 76.7
gcs immediate cr (null) converts 396,284 190.9 721.8
gcs msgs process time(ms) 42,220 20.3 76.9
gcs msgs received 545,133 262.6 993.0
gcs out-of-order msgs 5 0.0 0.0
gcs pings refused 1 0.0 0.0
gcs queued converts 0 0.0 0.0
gcs recovery claim msgs 0 0.0 0.0
gcs refuse xid 0 0.0 0.0
gcs retry convert request 0 0.0 0.0
gcs side channel msgs actual 2,397 1.2 4.4
gcs side channel msgs logical 232,024 111.8 422.6
gcs write notification msgs 15 0.0 0.0
gcs write request msgs 278 0.1 0.5
gcs writes refused 1 0.0 0.0
ges msgs process time(ms) 4,873 2.3 8.9
ges msgs received 39,769 19.2 72.4
global posts dropped 0 0.0 0.0
global posts queue time 0 0.0 0.0
global posts queued 0 0.0 0.0
global posts requested 0 0.0 0.0
global posts sent 0 0.0 0.0
implicit batch messages received 39,098 18.8 71.2
implicit batch messages sent 33,386 16.1 60.8
lmd msg send time(ms) 635 0.3 1.2
lms(s) msg send time(ms) 2 0.0 0.0
messages flow controlled 196,546 94.7 358.0
messages received actual 182,783 88.0 332.9
messages received logical 584,848 281.7 1,065.3
messages sent directly 102,657 49.4 187.0
messages sent indirectly 230,329 110.9 419.5
msgs causing lmd to send msgs 9,169 4.4 16.7
msgs causing lms(s) to send msgs 3,347 1.6 6.1
msgs received queue time (ms) 142,759 68.8 260.0
msgs received queued 584,818 281.7 1,065.2
msgs sent queue time (ms) 99,300 47.8 180.9
msgs sent queue time on ksxp (ms) 608,239 293.0 1,107.9
msgs sent queued 230,391 111.0 419.7
msgs sent queued on ksxp 229,013 110.3 417.1
GES Statistics for DB: ai Instance: ai11 Snaps: 175 -176
Statistic Total per Second per Trans
process batch messages received 65,059 31.3 118.5
process batch messages sent 50,959 24.5 92.8
Wait Events for DB: ai Instance: ai11 Snaps: 175 -176
-> s - second
-> cs - centisecond - 100th of a second
-> ms - millisecond - 1000th of a second
-> us - microsecond - 1000000th of a second
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
db file sequential read 60,550 0 300 5 110.3
global cache cr request 130,852 106 177 1 238.3
db file scattered read 72,915 0 101 1 132.8
db file parallel read 3,384 0 75 22 6.2
latch free 7,253 1,587 52 7 13.2
enqueue 44,947 0 16 0 81.9
log file parallel write 2,140 0 6 3 3.9
db file parallel write 341 0 5 14 0.6
global cache open x 1,134 3 4 4 2.1
CGS wait for IPC msg 166,993 164,390 4 0 304.2
library cache lock 3,169 0 3 1 5.8
log file sync 494 0 3 5 0.9
row cache lock 702 0 3 4 1.3
DFS lock handle 6,900 0 2 0 12.6
control file parallel write 689 0 2 3 1.3
control file sequential read 2,785 0 2 1 5.1
wait for master scn 687 0 2 2 1.3
global cache null to x 699 0 2 2 1.3
global cache s to x 778 5 1 2 1.4
direct path write 148 0 0 3 0.3
SQL*Net more data to client 3,621 0 0 0 6.6
global cache open s 149 0 0 2 0.3
library cache pin 78 0 0 2 0.1
ksxr poll remote instances 3,536 2,422 0 0 6.4
LGWR wait for redo copy 12 6 0 9 0.0
buffer busy waits 23 0 0 5 0.0
direct path read 9 0 0 10 0.0
buffer busy global CR 5 0 0 17 0.0
SQL*Net break/reset to clien 172 0 0 0 0.3
global cache quiesce wait 4 1 0 7 0.0
KJC: Wait for msg sends to c 86 0 0 0 0.2
BFILE get length 67 0 0 0 0.1
global cache null to s 9 0 0 1 0.0
BFILE open 6 0 0 0 0.0
BFILE read 60 0 0 0 0.1
BFILE internal seek 60 0 0 0 0.1
BFILE closure 6 0 0 0 0.0
cr request retry 4 4 0 0 0.0
SQL*Net message from client 201,907 0 180,583 894 367.8
gcs remote message 171,236 43,749 3,975 23 311.9
ges remote message 79,324 40,722 2,017 25 144.5
SQL*Net more data from clien 447 0 9 21 0.8
SQL*Net message to client 201,901 0 0 0 367.8
Background Wait Events for DB: ai Instance: ai11 Snaps: 175 -176
-> ordered by wait time desc, waits desc (idle events last)
Avg
Total Wait wait Waits
Event Waits Timeouts Time (s) (ms) /txn
enqueue 44,666 0 16 0 81.4
latch free 4,895 108 6 1 8.9
log file parallel write 2,140 0 6 3 3.9
db file parallel write 341 0 5 14 0.6
CGS wait for IPC msg 166,969 164,366 4 0 304.1
DFS lock handle 6,900 0 2 0 12.6
control file parallel write 689 0 2 3 1.3
control file sequential read 2,733 0 2 1 5.0
wait for master scn 687 0 2 2 1.3
ksxr poll remote instances 3,528 2,419 0 0 6.4
LGWR wait for redo copy 12 6 0 9 0.0
db file sequential read 7 0 0 9 0.0
global cache null to x 26 0 0 2 0.0
global cache open x 16 0 0 1 0.0
global cache cr request 1 0 0 0 0.0
rdbms ipc message 28,636 20,600 16,937 591 52.2
gcs remote message 171,254 43,746 3,974 23 311.9
pmon timer 708 686 2,022 2856 1.3
ges remote message 79,305 40,719 2,017 25 144.5
smon timer 125 0 1,972 15776 0.2
SQL ordered by Gets for DB: ai Instance: ai11 Snaps: 175 -176
-> End Buffer Gets Threshold: 10000
-> Note that resources reported for PL/SQL includes the resources used by
all SQL statements called within the PL/SQL code. As individual SQL
statements are also reported, it is possible and valid for the summed
total % to exceed 100
CPU Elapsd
Buffer Gets Executions Gets per Exec %Total Time (s) Time (s) Hash Value
17,503,305 84 208,372.7 51.1 676.25 1218.80 17574787
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */DISTINCT CF_COU
RSE_CODE FROM OM_COURSE_FEE,OT_STUDENT_FEE_COL_HEAD,OT_STUDENT
FEECOL_DETL WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID AND SFCD_FE
E_TYPE_CODE = CF_TYPE_CODE AND CF_COURSE_CODE IN ( 'PE1','PE2',
'CCT','CPT' ) AND SFCH_TXN_CODE = :b1 AND SFCD_SUBSCRBE_JOURNA
user00726 wrote:
Some of the changes that have been done:
cai11_lmd0_15771.trc.
Thu Oct 2 13:35:48 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 2512 MB, Target size = 3072 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 13:35:50 2008
ALTER SYSTEM SET db_cache_size='3072M' SCOPE=BOTH SID='icai11';
Thu Oct 2 14:04:34 2008
ALTER SYSTEM SET sga_max_size='5244772679680' SCOPE=SPFILE SID='*';
ALTER SYSTEM SET sga_max_size='5244772679680' SCOPE=SPFILE SID='*';
Thu Oct 2 15:24:14 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 3072 MB, Target size = 2512 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 15:24:20 2008
ALTER SYSTEM SET db_cache_size='2512M' SCOPE=BOTH SID='icai11';
Thu Oct 2 15:32:33 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 2512 MB, Target size = 3072 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 15:32:34 2008
ALTER SYSTEM SET db_cache_size='3072M' SCOPE=BOTH SID='icai11';
Thu Oct 2 15:36:46 2008
ALTER SYSTEM SET shared_pool_size='640M' SCOPE=BOTH SID='icai11';
Thu Oct 2 16:33:52 2008
CKPT: Begin resize of buffer pool 3 (DEFAULT for block size 8192)
CKPT: Current size = 3072 MB, Target size = 2512 MB
CKPT: Resize completed for buffer pool DEFAULT for blocksize 8192
Thu Oct 2 16:33:56 2008
ALTER SYSTEM SET db_cache_size='2512M' SCOPE=BOTH SID='icai11';
Thu Oct 2 16:39:30 2008
ALTER SYSTEM SET pga_aggregate_target='750M' SCOPE=BOTH SID='icai11';
Just to make certain that I am not missing anything, the above sets (all scaled to GB just for the sake of comparison):
sga_max_size=4885GB
db_cache_size=3GB
shared_pool_size=0.625GB
pga_aggregate_target=0.733GB
The SQL statement is forcing the use of a specific index, is this the best index?:
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */DISTINCT CF_COURSE_CODE FROM OM_COURSE_FEE,OT_STUDENT_FEE_COL_HEAD,OT_STUDENT_FEE_COL_DETL WHERE SFCH_SYS_ID = SFCD_SFCH_SYS_ID AND
SFCD_FEE_TYPE_CODE = CF_TYPE_CODE AND CF_COURSE_CODE IN ( 'PE1','PE2','CCT','CPT' ) AND SFCH_TXN_CODE = :b1 AND SFCD_SUBSCRBE_JOURNAL_YN IN ( 'T','R','1','C' ) AND SFCH_APPR_UID IS NOT NULL
AND SFCH_APPR_DT IS NOT NULL AND SFCH_NO = :b2 AND NOT EXISTS (SELECT 'X' FROM OM_STUDENT_HEAD WHERE STUD_SRN = SFCH_STUD_SRN_TEMP_NO AND NVL(STUD_CLO_STATUS,0) != 1 AND
NVL(STUD_REG_STATUS,0) != 23 AND STUD_COURSE_CODE != 'CCT' AND CF_COURSE_CODE= STUD_COURSE_CODE ) ORDER BY 1 DESC
Unfortunately, explain plans, even with dbms_xplan.display, may not show the actual execution plan - this is more of a problem when the SQL statement includes bind variables (capturing a 10046 trace at level 8 or 12 will help). With the information provided, it looks like the problem is with the number of logical reads performed: 17,503,305 in 84 executions = 208,373 logical reads per execution. Something is causing the SQL statement to execute inefficiently, possibly a join problem between tables, possibly the forced use of an index.
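As a reference for the suggestion above, a minimal sketch of capturing a 10046 trace at level 12 (binds and waits) from the session that runs the statement; the tracefile_identifier value is arbitrary and just makes the trace file easy to find:

```sql
ALTER SESSION SET tracefile_identifier = 'course_fee_10046';
ALTER SESSION SET events '10046 trace name context forever, level 12';
-- ... run the problem statement here with representative bind values ...
ALTER SESSION SET events '10046 trace name context off';
-- then format the raw trace file with tkprof from the OS prompt
```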
From one of my previous posts related to this same SQL statement:
SELECT /*+ INDEX (OM_COURSE_FEE OCF_CFC_CODE) */ DISTINCT
CF_COURSE_CODE
FROM
OM_COURSE_FEE,
OT_STUDENT_FEE_COL_HEAD,
OT_STUDENT_FEE_COL_DETL
WHERE
SFCH_SYS_ID = SFCD_SFCH_SYS_ID
AND SFCD_FEE_TYPE_CODE = CF_TYPE_CODE
AND CF_COURSE_CODE IN ( 'PE1','PE2','CCT','CPT' )
AND SFCH_TXN_CODE = :b1
AND SFCD_SUBSCRBE_JOURNAL_YN IN ( 'T','R','1','C' )
AND SFCH_APPR_UID IS NOT NULL
AND SFCH_APPR_DT IS NOT NULL
AND SFCH_NO = :b2
AND NOT EXISTS (
SELECT
'X'
FROM
OM_STUDENT_HEAD
WHERE
STUD_SRN = SFCH_STUD_SRN_TEMP_NO
AND NVL(STUD_CLO_STATUS,0) != 1
AND NVL(STUD_REG_STATUS,0) != 23
AND STUD_COURSE_CODE != 'CCT'
AND CF_COURSE_CODE = STUD_COURSE_CODE)
ORDER BY
1 DESC
| Id | Operation                          | Name                         | Rows  | Bytes | Cost | Pstart| Pstop |
|  0 | SELECT STATEMENT                   |                              |     1 |    69 |   34 |       |       |
|  1 | SORT UNIQUE                        |                              |     1 |    69 |   22 |       |       |
|  2 | TABLE ACCESS BY GLOBAL INDEX ROWID | OT_STUDENT_FEE_COL_DETL      |     1 |    12 |    2 | ROWID | ROW L |
|  3 | NESTED LOOPS                       |                              |     1 |    69 |    9 |       |       |
|  4 | NESTED LOOPS                       |                              |     1 |    57 |    7 |       |       |
|  5 | TABLE ACCESS BY GLOBAL INDEX ROWID | OT_STUDENT_FEE_COL_HEAD      |     1 |    48 |    5 | ROWID | ROW L |
|  6 | INDEX SKIP SCAN                    | OT_STUDENT_FEE_COL_HEAD_UK01 |     1 |       |    4 |       |       |
|  7 | INLIST ITERATOR                    |                              |       |       |      |       |       |
|  8 | TABLE ACCESS BY INDEX ROWID        | OM_COURSE_FEE                |     1 |     9 |    2 |       |       |
|  9 | INDEX RANGE SCAN                   | OCF_CFC_CODE                 |     1 |       |    1 |       |       |
| 10 | FILTER                             |                              |       |       |      |       |       |
| 11 | TABLE ACCESS BY GLOBAL INDEX ROWID | OM_STUDENT_HEAD              |     1 |    21 |    4 | ROWID | ROW L |
| 12 | INDEX RANGE SCAN                   | IDM_STUD_SRN_COURSE          |     1 |       |    3 |       |       |
| 13 | INDEX RANGE SCAN                   | IDM_SFCD_FEE_TYPE_CODE       | 34600 |       |    1 |       |       |
It appears, based just on the SQL statement, that there is no direct relationship between OM_COURSE_FEE and OT_STUDENT_FEE_COL_HEAD, yet the plan above indicates that the two tables are being joined together, likely as a result of the index hint. There is the possibility of additional predicates being generated by Oracle which will make this possible without introducing a Cartesian join, but those predicates are not displayed with an explain plan (they will appear with a DBMS_XPLAN). I may not be remembering correctly, but if the optimizer goal is set to first rows, a Cartesian join might appear in a plan as a nested loops join - Jonathan will know for certain if that is the case.
Have you investigated the above as a possible cause? If you know that there is a problem with this one SQL statement, a 10046 trace at level 8 or 12 is much more helpful than a Statspack report.
Charles Hooper
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc. -
Need help on Reading BFILE, CLOB
Hi all,
I am using Linux Fedora 4.
I have created a directory:
create directory clob_dir as '/db10g/my_dir';
and created a table from examples in a book I have:
CREATE TABLE BFILE_DOC
( ID INTEGER,
  BFILE_CLMN BFILE
);
and a procedure to read a text file's contents:
CREATE OR REPLACE
PROCEDURE FILE_EXAMPLE (id_var in number) AS
SRC_BFILE_VAR BFILE;
DIR_ALIAS_VAR VARCHAR2(50);
FILENAME_VAR VARCHAR2(50);
CHUNK_SIZE_VAR INTEGER;
LENGTH_VAR INTEGER;
CHARS_READ_VAR INTEGER;
DEST_CLOB_VAR NCLOB;
AMOUNT_VAR INTEGER := 80;
DEST_OFFSET_VAR INTEGER := 1;
SRC_OFFSET_VAR INTEGER := 1;
CHAR_BUFFER_VAR VARCHAR2(160);
BEGIN
SELECT BFILE_CLMN
INTO SRC_BFILE_VAR
FROM BFILE_DOC
WHERE ID = id_var;
DBMS_LOB.CREATETEMPORARY(DEST_CLOB_VAR, TRUE);
IF (DBMS_LOB.FILEEXISTS(SRC_BFILE_VAR) = 1) THEN
IF (DBMS_LOB.FILEISOPEN(SRC_BFILE_VAR) = 0) THEN
DBMS_LOB.FILEOPEN(SRC_BFILE_VAR);
DBMS_LOB.FILEGETNAME(SRC_BFILE_VAR,
DIR_ALIAS_VAR,
FILENAME_VAR);
DBMS_OUTPUT.PUT_LINE('DIRECTORY NAME: ' || DIR_ALIAS_VAR);
DBMS_OUTPUT.PUT_LINE('FILE NAME : ' || FILENAME_VAR );
CHUNK_SIZE_VAR := DBMS_LOB.GETCHUNKSIZE(DEST_CLOB_VAR);
DBMS_OUTPUT.PUT_LINE('CHUNK SIZE : ' || CHUNK_SIZE_VAR);
LENGTH_VAR := DBMS_LOB.GETLENGTH(SRC_BFILE_VAR);
DBMS_OUTPUT.PUT_LINE('LENGTH : ' || LENGTH_VAR);
CHARS_READ_VAR := 0;
WHILE (CHARS_READ_VAR < LENGTH_VAR) LOOP
IF (LENGTH_VAR - CHARS_READ_VAR < AMOUNT_VAR) THEN
AMOUNT_VAR := LENGTH_VAR - CHARS_READ_VAR;
END IF;
DBMS_LOB.LOADFROMFILE(DEST_CLOB_VAR, SRC_BFILE_VAR,
AMOUNT_VAR, DEST_OFFSET_VAR,
SRC_OFFSET_VAR + CHARS_READ_VAR);
DBMS_LOB.READ(DEST_CLOB_VAR,
AMOUNT_VAR,
SRC_OFFSET_VAR,
CHAR_BUFFER_VAR);
DBMS_OUTPUT.PUT_LINE('CHAR BUFFER VAR:' || CHAR_BUFFER_VAR);
CHARS_READ_VAR := CHARS_READ_VAR + AMOUNT_VAR;
END LOOP;
END IF ;
END IF;
DBMS_LOB.FILECLOSE(SRC_BFILE_VAR);
DBMS_LOB.FREETEMPORARY(DEST_CLOB_VAR);
END FILE_EXAMPLE;
Then I put a text file (size 456.5 KB) in the MY_DIR folder, and inserted a row into the table:
INSERT INTO BFILE_DOC( ID , BFILE_CLMN)
VALUES (1, BFILENAME('CLOB_DIR', 'abc.txt'));
Then I called the procedure:
SQL> call file_example(1);
here what i got
CHAR BUFFER VAR:噁‰㐳㔠〵㨲㔠††††††††††††††䥓单䕄㨠〷⼳〯〶†䩅䐠䉁卅†ਠⴭⴭⴭⴭ
CHAR BUFFER VAR:†††ਠ㈹䙒††††㜷䈠⁇噁‰㔳㔠⁆剁‰†䥓单䕄㨠〷⼳〯〶†䩅䐠䉁卅†ਠⴭⴭⴭⴭ
CHAR BUFFER VAR:㜰㔠〱㨳〠††††††††††††††††䥓单䕄㨠〷⼳〯〶†䩅䐠䉁卅†ਠⴭⴭⴭⴭ
CHAR BUFFER VAR:†ਠ††††††䵁剉呉䴠䡏呅䰠††††††䥓单䕄㨠〷⼳〯〶†䩅䐠䉁卅†ਠⴭⴭⴭⴭ
CHAR BUFFER VAR:††††〶㨵㔠〹㨵㔠〶㨵㔠⁆剁†㜷㨴〠ਠ†䥓单䕄㨠〷⼳〯〶†䩅䐠䉁卅†ਠⴭⴭⴭⴭ
CHAR BUFFER VAR:〲䵏†‱㠶†㜷䈠⁆剁‱㈴㔠⁇噁‱㐰〠〱㨱†䥓单䕄㨠〷⼳〯〶†䩅䐠䉁卅†ਠⴭⴭⴭⴭ
call file_example(1)
ERROR at line 1:
ORA-20000: ORU-10027: buffer overflow, limit of 1000000 bytes
ORA-06512: at "SYS.DBMS_OUTPUT", line 32
ORA-06512: at "SYS.DBMS_OUTPUT", line 97
ORA-06512: at "SYS.DBMS_OUTPUT", line 112
ORA-06512: at "SCOTT.FILE_EXAMPLE", line 51
and here a sample of the content in the text file if i open it with text editor
1 MO REPORT AT 12:35 EFFECTIVE SEP 04-SEP 18
1 SA
DAY FLT: EQP DEPARTS ARRIVES BLK. BLK: DUTY CR: LAYOVER
MO 034 744 JED 1405 RUH 1540 01:35
SAHARA AIRPORT HOTEL 01:35 03:35 01:35 RUH 19:05
TU 1905 AB3 RUH 1045 TIF 1215 01:30
TU 1904 AB3 TIF 1315 RUH 1440 01:25
TU 1783 AB3 RUH 1540 GIZ 1730 01:50
TU 1777 AB3 GIZ 1820 JED 1930 01:10
05:55 10:45 05:55
CREDIT_HRS: 07:30 BLK_HRS: 07:30 LDGS: 5 TAFB 31:25
2 MO REPORT AT 21:25 EFFECTIVE SEP 04-SEP 18
1 SA
DAY FLT: EQP DEPARTS ARRIVES BLK. BLK: DUTY CR: LAYOVER
MO 113 77B JED 2255 LHR 0520 06:25
HILTON LONDON KENSIN 06:25 08:25 06:25 LHR 38:25
WE 116 77B LHR 1945 JED 0150 06:05
06:05 08:05 06:05
CREDIT_HRS: 12:30 BLK_HRS: 12:30 LDGS: 2 TAFB 52:55
( the text file has more than 6000 line)
notes:
my ENV file ( bash_profile)
# .db10g
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin; export PATH
ORACLE_BASE=/db10g/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1; export ORACLE_HOME
ORACLE_SID=orcl; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH
NLS_LANG=AMERICAN_AMERICA.AL32UTF8; export NLS_LANG
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
unset USERNAME
The charset of the database is set to AL32UTF8. during the installtion time
The questions:
What's wrong here?
Why can I not read the content of the txt file as it is?
I tried this on Windows XP with SQL*Plus; I got the same characters but no error.
What should I do to avoid the errors and to see the text file as it is?
How do I load the content of the text file into a CLOB column if it is larger than 4000 bytes?
Thanks.
set serveroutput on
BEGIN
-- Enable DBMS_OUTPUT and set the buffer size. The buffer size can be between 1 and 1,000,000
dbms_output.enable(1000000);
END;
Check the datatypes of the supplied packages in the documentation:
dbms_output.put_line(item VARCHAR2);
dbms_lob.read(
lob_loc IN BLOB,
amount IN OUT NOCOPY INTEGER,
offset IN INTEGER,
buffer OUT RAW);
Also, sample usage of the package will help: http://psoug.org/reference/dbms_lob.html -
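As the datatype mismatch above suggests, DBMS_LOB.LOADFROMFILE is a binary copy, so the bytes never pass through character-set conversion; the CJK-looking output is what happens when the file's encoding doesn't match how the bytes are interpreted. A sketch of the character-aware alternative, DBMS_LOB.LOADCLOBFROMFILE (available from Oracle 10g, which the poster appears to be on), reusing the CLOB_DIR/abc.txt names from the question; if the file is not in the database character set, the actual character set ID (e.g. from NLS_CHARSET_ID) should be passed as bfile_csid instead:

```sql
-- Hedged sketch: load character data from a BFILE with charset conversion.
DECLARE
  src_bfile    BFILE := BFILENAME('CLOB_DIR', 'abc.txt');
  dest_clob    CLOB;
  dest_offset  INTEGER := 1;
  src_offset   INTEGER := 1;
  lang_context INTEGER := DBMS_LOB.DEFAULT_LANG_CTX;
  warning      INTEGER;
BEGIN
  DBMS_LOB.CREATETEMPORARY(dest_clob, TRUE);
  DBMS_LOB.FILEOPEN(src_bfile, DBMS_LOB.FILE_READONLY);
  DBMS_LOB.LOADCLOBFROMFILE(
    dest_lob     => dest_clob,
    src_bfile    => src_bfile,
    amount       => DBMS_LOB.LOBMAXSIZE,   -- load the whole file
    dest_offset  => dest_offset,
    src_offset   => src_offset,
    bfile_csid   => DBMS_LOB.DEFAULT_CSID, -- assumes file is in the DB charset
    lang_context => lang_context,
    warning      => warning);
  DBMS_OUTPUT.PUT_LINE('Loaded ' || DBMS_LOB.GETLENGTH(dest_clob) || ' characters');
  DBMS_LOB.FILECLOSE(src_bfile);
  DBMS_LOB.FREETEMPORARY(dest_clob);
END;
/
```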
To BFILE or to BLOB that is the question ?
G'day all,
I'm looking for some advice on the pros and cons of using BFILE vs BLOB. Currently I'm using BLOB to store images, which is fine I guess on Oracle Std and Enterprise, but when I deploy the application to customers on a laptop I'm going to use Oracle XE.
All performance looks good for Oracle XE on the laptop, but a question was raised that I should have considered myself (it slipped my mind): Oracle XE has a 4GB tablespace limit, and that impacts us once our database gets fully populated with images.
Bugger, that 4 gig will get chewed up fairly quickly :( So I'm now wondering if I should use BFILE; that should mean the image disk usage on the Oracle tablespace no longer applies, as the image data is external to the database, right?
I was wondering if anyone could confirm my thoughts and, if possible, give me some pros/cons or catch-22s if I do go down the BFILE path. Obviously I'm aware that directory permissions and backup policy now mean the images need to be backed up externally, but anything else critical to BFILEs would be most appreciated.
Regards
Mark -
Problem with Firefox and very heavy memory usage
For several releases now, Firefox has been particularly heavy on memory usage. With its most recent version, with a single browser instance and only one tab, Firefox consumes more memory than any other application running on my Windows PC. The memory footprint grows significantly as I open additional tabs, getting to almost 1GB when there are 7 or 8 tabs open. This is true with no extensions or plugins, and with the usual set (Firebug, FireCookie, FireShot, XMarks). Right now, with 2 tabs, the memory is at 217,128K and climbing; CPU is between 0.2 and 1.5%.
I have read dozens of threads providing "helpful" suggestions, and tried any that seem reasonable. But like most others who experience Firefox's memory problems, none address the issue.
Firefox is an excellent tool for web developers, and I rely on it heavily, but have now resorted to using Chrome as the default and only open Firefox when I must, in order to test or debug a page.
Is there no hope of resolving this problem? So far, from responses to other similar threads, the response has been to deny any responsibility and blame extensions and/or plugins. This is not helpful and not accurate. Will Firefox accept ownership of this problem and try to address it properly, or must we continue to suffer for your failings? -
Problem with scanning and memory usage
I'm running PS CS3 on Vista Home Premium, 1.86Ghz Intel core 2 processor, and 4GB RAM.
I realise Vista only sees 3.3GB of this RAM, and I know Vista uses about 1GB all the time.
Question:
While running PS, and only PS, with no files open, I have 2GB of RAM free. Why will PS not let me scan a file that it says will take up 300Mb?
200Mb is about the limit that it will let me scan, and even then the actual end product ends up being less than 100Mb (around 70Mb in most cases). I'm using a Dell AIO A920, latest drivers etc., and PS is set to use all available RAM.
Not only will it not let me scan: once a file I've opened has used up "x" amount of RAM, even if I then close that file, that "x" amount of RAM will STILL be unavailable. This means if I scan something, I have to save it, close PS, then open it again before I can scan anything else.
Surely this isn't normal. Or am I being stupid and missing something obvious?
I've also monitored the memory usage during scanning using Task Manager and various other tools; it hardly goes up at all, then shoots up to 70-80% once the 70-ish Mb file is loaded. Something is up, because if that were true I'd actually only have 1Gb of RAM, and running Vista would be nearly impossible.
It's not a Vista thing either, as I had this problem when I had XP. In fact it was worse then; I could hardly scan anything, and had to use very low resolution.
Thanks in advance for any help.
55%, it's still 1.6Gb... there shouldn't be a problem scanning something that it says will take up 300Mb, then actually only takes up 70Mb.
And not wrong, it obviously isn't releasing the memory when other applications need it because it doesn't, I have to close PS before it will release it. Yes, it probably is supposed to release it, but it isn't.
Thank you for your answer (even if it did appear to me to be a bit rude/shouty, perhaps something more polite than "Wrong!" next time) but I'm sitting at my computer, and I can see what is using how much memory and when, you can't. -
Problem with JTree and memory usage
I have a problem with the JTree when memory usage exceeds the physical memory (I have 512MB).
I use a JTree to display very large data about the structural organization of a big company. It works fine until memory usage exceeds the physical memory; then some of the nodes are not visible.
I hope somebody has an idea about this problem. -
I reset my cellular data usage every month at the beginning of my billing period so that I can keep track of how much data I am using throughout the month. I do not have the unlimited data plan and have gone over a few times. It's usually in the last couple of days of my billing cycle, and I am charged $10 for an extra gig, of which I barely use 10% before my usage is restarted for the new billing cycle. FYI, they do not carry over the remainder of the unused gig that you purchase for $10, which I disagree with but have accepted. Lol. Moving on.
I have two questions, one possibly being answered by the other. 1. Is cellular data used when I sync to iTunes on my home computer to load songs onto my iPhone from my iTunes library (songs that already exist)? 2. What causes the cellular data usage readings in Settings/General/Usage/Cellular Usage to decrease if my last reset has not changed? It is close to the end of my billing cycle and I have been keeping a close eye on it. Earlier today it read around 180MB sent / 1.2GB received. Now it reads 90MB sent / 980MB received. My last reset date reads the same as before. I don't know if my sync and music management had anything to do with the decrease, but I didn't notice it until after I had connected to iTunes and loaded music. I should also mention that a 700MB app automatically loaded itself onto my phone during the sync process. Is cellular data used at all during any kind of sync/iPhone/iTunes management? Is the cellular data usage reading under iPhone settings a reliable source for keeping track of monthly data usage? Guess it turned out to be more than two questions, but all related. Thanks in advance for any help you can offer me. It is information I find valuable. Sorry for the book-long question.
Is cellular data used at all during any kind of sync/iPhone/iTunes management? Is the cellular data usage reading under iPhone settings a reliable source for keeping track of monthly data usage?
1) No.
2) It does provide an estimated usage, but it's not accurate. For accurate determination, you should check the remaining/used MBs from the carrier (most of the carriers provide this service for free). -
Background: My home does not have access to landline internet and I am relying on a Verizon Jetpack device rather than satellite or cable for internet access. Within my Jetpack wireless network I have several devices connected: iPads, PCs, printer, etc. (up to 5 devices can be connected with my system). Recently, one device within my immediate wireless network needed to transfer data via FTP to another local device within my immediate network. The receiving device FTP'd data from device 192.168.1.42 to local IP address 192.168.1.2. To understand the data path, a trace route for this transfer is 192.168.1.42 (source) to 192.168.1.1 (Jetpack) to 192.168.1.2 (receiver). Note that the VZW network (cloud/4G network) was NOT used in this transfer, nor was any VZW bandwidth used for it (of course, less the typical overhead status/maintenance communications that happen regardless). Use of VZW bandwidth would be something like downloading a movie from Netflix (transferring data from a remote site, through the VZW network, into the Jetpack and to the target device). I also understand that if one's devices have auto SW updates, those would go against the usage as well. I get that. My understanding is that my usage billing is based on data transfer across the VZW network.
Now to the problem: To my surprise, I was quite substantially charged for the "internal" direct transfer of this data, although I didn't use the VZW network at all. This does not seem right and doesn't make much sense to me. Usage should be based on the VZW network, not a local Wi-Fi. In this case, Verizon actually gets free money from me without any use of or impact to the VZW network. Basically this is a VERY expensive rental for the Jetpack device, which I bought anyway. Considering this, I am also charged each time I print locally. Digging into this further, I am also interested in knowing what the overhead (non-data) communication between the Jetpack router and devices is, and how much that adds up to over time.
Once I realized I was getting socked on bandwidth, as a temp solution I found an old Wi-Fi router and created a separate Wi-Fi network off the Jetpack, but the billing damage was already done. Switching each device back and forth to FTP and print is a hassle, and there should be no reason the existing hardware couldn't handle this with charges aligned to VZW usage. Is this purposely intended by Verizon? Is this charging correct? And can I get some help with this?
Logically, usage should be based on VZW network usage, not internal transfers. All transfers between 192.168.x.x addresses are by default internal. Any data that needs to leave the internal network is translated into the dynamic IP addresses established by the VZW network. That should be very easily detected for usage. At the very least, this fact needs to be clearly identified and clarified to users of the Jetpack. How would one use a local network and not get socked with usage charges? Can one set up a Wi-Fi network with another router, hardwired directly from the router to the Jetpack, so that only data to and from the VZW network is billed? I might be able to figure out how to have the Jetpack powered on but disable the VZW connection, but I don't want to experiment and find out that the internal transfers are being logged and the log sent after the fact anyway once I connect... A reasonable solution would be for users to be able to use the router functions of the Jetpack (since one has to buy the device anyway) and only be billed for VZW usage.
Your help in this would be greatly appreciated. Thanks -
I had one Mac and spilled water on it; the motherboard fried, so I had to buy a used one... being in school and all. It is an MC375LL/A.
Model Name: MacBook Pro
Model Identifier: MacBookPro7,1
Processor Name: Intel Core 2 Duo
Processor Speed: 2.66 GHz
Number of Processors: 1
Total Number of Cores: 2
L2 Cache: 3 MB
Memory: 8 GB
Bus Speed: 1.07 GHz
Boot ROM Version: MBP71.0039.B0E
SMC Version (system): 1.62f7
Hardware UUID: A802DE22-1E57-5509-93C5-27CEF01377B7
Sudden Motion Sensor: State: Enabled
I do not have a backup of it, so I am thinking about moving the hard drive from the water-damaged one into this one. Not even sure if that would work, but the drive did not seem to be damaged, as I recovered all the files I wanted off of it to put onto this MBP.
The previous owner didn't have it set to boot; they had all their settings left on it and tried to edit all the names on it, had a bunch of server info and printers etc. crap on it. I do not believe he edited the terminal system though -- he doesn't seem to be terribly bright (if that's possible).
TBH I hate Lion compared to the old one I had; this one has so many more issues: overheating, fan noise, CD/DVD noise.
If you need screenshots or data of anything else, ask away.
Problem is I do not want to start from scratch if there is a chance of fixing it; this one did not come with disks or anything like my first, so I don't even know if I could. As it sits now I am basically starting from scratch, because all my apps are reset but working. I am hoping to get my data back somehow though; I lost all of my bookmarks, and editing all my apps and settings again would be a pain -
High memory usage when many tabs are open (and closed)
When I open Firefox the memory usage is about 70-100 MB RAM. When I'm working with Firefox I often open 15-20 tabs at once, then I close them and open others. Memory usage increases up to 450-500 MB RAM. After closing the tabs the usage usually decreases, but only after some time. It starts decreasing very slowly and never comes back to the level from the beginning. After a few hours of work Firefox uses about 400 MB RAM even if only one tab is open. First I thought it was because of my plugins (Firebug, Speed Dial, Adblock Plus) but I've checked and it's not. I reinstalled Firefox but the problem occurs as well. I'm not sure if this is normal. Could you help me, please?
Hi,
Not an exact answer, but please [http://kb.mozillazine.org/Reducing_memory_usage_(Firefox) see this]. The KB article walks through various general situations which may or may not apply to specific instances, but could nevertheless be helpful.
Useful links:
[https://support.mozilla.com/en-US/kb/Options%20window All about Tools > Options]
[http://kb.mozillazine.org/About:config Going beyond Tools > Options - about:config]
[http://kb.mozillazine.org/About:config_entries about:config Entries]
[https://addons.mozilla.org/en-US/firefox/addon/whats-that-preference/ What's That Preference? add-on] - Quickly decode about:config entries - After installation, go inside about:config, right-click any preference, enable (tick) MozillaZine Results to Panel and again right-click a pref and choose MozillaZine Reference to begin with.
[https://support.mozilla.com/en-US/kb/Keyboard%20shortcuts Keyboard Shortcuts]
[http://kb.mozillazine.org/Profile_folder_-_Firefox Firefox Profile Folder & Files]
[https://support.mozilla.com/en-US/kb/Safe%20Mode Safe Mode]
[http://kb.mozillazine.org/Problematic_extensions Problematic Extensions/Add-ons]
[https://support.mozilla.com/en-US/kb/Troubleshooting%20extensions%20and%20themes Troubleshooting Extensions and Themes] -
Hi
I am working in a sharepoint migration project. We have migrated one SharePoint project from moss2007 to sp2013. Issue is when we are clicking on Popularity trend > usage report, it is throwing an error.
Issue: The data was not being processed to the EVENT STORE folder, which is present under the Analytics_GUID folder. Data was also not present in the Analytics Store database.
In the log viewer I found the errors below.
HIGH -
SearchServiceApplicationProxy::GetAnalyticsEventTypeDefinitions--Error occured: System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly
secured fault was received from the other party.
UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
HIGH - Getting Error Message for Exception System.Web.HttpUnhandledException
(0x80004005): Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ServiceModel.Security.MessageSecurityException: An unsecured or incorrectly secured fault was received from the other party.
CRITICAL - A failure was reported when trying to invoke a service application:
EndpointFailure Process Name: w3wp Process ID: 13960 AppDomain Name: /LM/W3SVC/767692721/ROOT-1-130480636828071139 AppDomain ID: 2 Service Application Uri: urn:schemas-microsoft-
UNEXPECTED - Could not retrieve analytics event definitions for
https://XXX System.ServiceModel.FaultException`1[System.ServiceModel.ExceptionDetail]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
UNEXPECTED - System.ServiceModel.FaultException`1[[System.ServiceModel.ExceptionDetail,
System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]: We're sorry, we weren't able to complete the operation, please try again in a few minutes.
I have verified a few things on the server, which are mentioned below:
Two timer jobs (Microsoft SharePoint Foundation Usage Data Processing, Microsoft SharePoint Foundation Usage Data Import) are running fine.
APPFabric Caching service has been started.
The Analytics_GUID folder has been shared with WSS_ADMIN_WPG and WSS_WPG, and Read/Write access was granted.
.usage files are getting created, and the temporary (.tmp) file has been created.
The usage logging database for usage data is being populated; the data is available.
Please provide pointers on what needs to be done.
Hi Nabhendu,
According to your description, my understanding is that you could not use popularity trend after you migrated SharePoint 2007 to SharePoint 2013.
In SharePoint 2013, the analytics functionality is a part of the search component. There is an article for troubleshooting SharePoint 2013 Web Analytics, please take a look at:
Troubleshooting SharePoint 2013 Web Analytics
http://blog.fpweb.net/troubleshooting-sharepoint-2013-web-analytics/#.U8NyA_kabp4
I hope this helps.
Thanks,
Wendy
Wendy Li
TechNet Community Support -
Memory usage of excel stays high after Macro is executed and excel crashes after trying to close it
Hi,
I'm trying to resolve an issue with an excel based tool. The macros retrieve data from an Oracle database and do calculations with the data. They also open and write into files in the same directory. The macros all run and finish the calculations. I can
continue to use and modify the sheet. I can also close the workbook; however, the Excel memory usage I see in the Windows Task Manager stays elevated. If I close Excel it says "Excel stopped working" and then it tries to recover information...
I assume something in the macro did not finish properly and memory was not released. I would like to check what is still open (connection, stream or any other object) when I close the workbook, and I would like a list of all memory still in use. Is there a possibility to do so?
Here is the code I'm using; it's reduced to the functions which open something. get_v_tools() and get_change_tools() are the same as get_client_positions().
Public conODBC As New ADODB.Connection
Public myPath As String
Sub get_positions()
Dim Src As range, dst As range
Dim lastRow As Integer
Dim myPath As String
lastRow = Sheets("SQL_DATA").Cells(Sheets("SQL_DATA").rows.Count, "A").End(xlUp).Row
Sheets("SQL_DATA").range("A2:AD" & lastRow + 1).ClearContents
Sheets("SQL_DATA").range("AG2:BE" & lastRow + 2).ClearContents
Sheets("SQL_DATA").range("AE3:AF" & lastRow + 2).ClearContents
k = Sheets("ToolsList").Cells(Sheets("ToolsList").rows.Count, "A").End(xlUp).Row + 1
Sheets("ToolsList").range("A2:M" & k).ClearContents
'open connection
Call open_connection
lastRow = Sheets("SQL_DATA").Cells(Sheets("SQL_DATA").rows.Count, "A").End(xlUp).Row
If lastRow < 2 Then GoTo ErrorHandling
'copy bs price check multiplications
Set Src = Sheets("SQL_DATA").range("AE2:AF2")
Set dst = Worksheets("SQL_DATA").range("AE2").Resize(lastRow - 1, Src.columns.Count)
dst.Formula = Src.Formula
On Error GoTo ErrorHandling
'new prices are calculated
newPrice_calculate (lastRow)
Calculate
myPath = ThisWorkbook.Path
'Refresh pivot table in Position Manager
Sheets("Position Manager").PivotTables("PivotTable3").ChangePivotCache ActiveWorkbook. _
PivotCaches.Create(SourceType:=xlDatabase, SourceData:= _
myPath & "\[Position_Manager_v1.0.xlsm]SQL_DATA!R1C2:R" & lastRow & "C31" _
, Version:=xlPivotTableVersion14)
ErrorHandling:
Set Src = Nothing
Set dst = Nothing
If conODBC.State <> 0 Then
conODBC.Close
End If
End Sub
Sub open_connection()
Dim sql_data, sql_data_change, sql_data_v As Variant
Dim wdth, TotalColumns, startRow As Integer
Dim rst As New ADODB.Recordset
Errorcode = 0
On Error GoTo ErrorHandling
Errorcode = 1
With conODBC
.Provider = "OraOLEDB.Oracle.1"
.ConnectionString = "Password=" & pswrd & "; Persist Security Info=True;User ID= " & UserName & "; Data Source=" & DataSource
.CursorLocation = adUseClient
.Open
.CommandTimeout = 300
End With
startRow = Sheets("SQL_DATA").Cells(Sheets("SQL_DATA").rows.Count, "A").End(xlUp).Row + 1
sql_data = get_client_positions(conODBC, rst)
wdth = UBound(sql_data, 1)
Sheets("SQL_DATA").range("A" & startRow & ":AA" & wdth + startRow - 1).Value = sql_data
'Run change tools instruments
startRow = Sheets("ToolsList").Cells(Sheets("ToolsList").rows.Count, "A").End(xlUp).Row + 1
sql_data_change = get_change_tools(conODBC, rst)
wdth = UBound(sql_data_change, 1)
Sheets("ToolsList").range("A" & startRow & ":M" & wdth + startRow - 1).Value _
= sql_data_change
'open SQL for V tools instruments
startRow = Sheets("ToolsList").Cells(Sheets("ToolsList").rows.Count, "A").End(xlUp).Row + 1
sql_data_v = get_v_tools(conODBC, rst)
wdth = UBound(sql_data_v, 1)
Sheets("ToolsList").range("A" & startRow & ":L" & startRow + wdth - 1).Value = sql_data_v
conODBC.Close
ErrorHandling:
If rst.State <> 0 Then
rst.Close
End If
Set rst = Nothing
End Sub
Private Function get_client_positions(conODBC As ADODB.Connection, rst_posi As ADODB.Recordset) As Variant
Dim sql_data As Variant
Dim objCommand As ADODB.Command
Dim sql As String
Dim records, TotalColumns As Integer
On Error GoTo ErrorHandling
Set objCommand = New ADODB.Command
sql = read_sql()
With objCommand
.ActiveConnection = conODBC 'connection for the commands
.CommandType = adCmdText
.CommandText = sql 'Sql statement from the function
.Prepared = True
.CommandTimeout = 600
End With
Set rst_posi = objCommand.Execute
TotalColumns = rst_posi.Fields.Count
records = rst_posi.RecordCount
If TotalColumns = 0 Or records = 0 Then GoTo ErrorHandling
If TotalColumns <> 27 Then GoTo ErrorHandling
If rst_posi.EOF Then GoTo ErrorHandling
ReDim sql_data(1 To records, 1 To TotalColumns) 'size the array only after the checks pass; ReDim with 0 records would error
l = 1
Do While Not rst_posi.EOF
For i = 0 To TotalColumns - 1
sql_data(l, i + 1) = rst_posi.Fields(i)
Next i
l = l + 1
rst_posi.MoveNext
Loop
ErrorHandling:
rst_posi.Close
Set rst_posi = Nothing
Set objCommand = Nothing
get_client_positions = sql_data
End Function
Private Function read_sql() As String
Dim sqlFile As String, sqlQuery, Line As String
Dim query_dt As String, client As String, account As String
Dim GRP_ID, GRP_SPLIT_ID As String
Dim fso, stream As Object
Set fso = CreateObject("Scripting.FileSystemObject")
client = Worksheets("Cover").range("C9").Value
query_dt = Sheets("Cover").range("C7").Value
GRP_ID = Sheets("Cover").range("C3").Value
GRP_SPLIT_ID = Sheets("Cover").range("C5").Value
account = Sheets("Cover").range("C11").Value
sqlFile = Sheets("Cover").range("C15").Value
Open sqlFile For Input As #1
Do Until EOF(1)
Line Input #1, Line
sqlQuery = sqlQuery & vbCrLf & Line
Loop
Close
' Replace placeholders in the SQL
sqlQuery = Replace(sqlQuery, "myClent", client)
sqlQuery = Replace(sqlQuery, "01/01/9999", query_dt)
sqlQuery = Replace(sqlQuery, "54747743", GRP_ID)
If GRP_SPLIT_ID <> "" Then
sqlQuery = Replace(sqlQuery, "7754843", GRP_SPLIT_ID)
Else
sqlQuery = Replace(sqlQuery, "AND POS.GRP_SPLIT_ID = 7754843", "")
End If
If account = "ZZ" Then
sqlQuery = Replace(sqlQuery, "AND AC.ACCNT_NAME = 'ZZ'", "")
Else
sqlQuery = Replace(sqlQuery, "ZZ", account)
End If
' Create a TextStream to check SQL Query
sql = sqlQuery
myPath = ThisWorkbook.Path
Set stream = fso.CreateTextFile(myPath & "\SQL\LastQuery.txt", True)
stream.Write sql
stream.Close
Set fso = Nothing
Set stream = Nothing
read_sql = sqlQuery
End Function
Thanks Starain,
that's what I did the last days and found that the problem is in the
newPrice_calculate (lastRow)
function. This function retrieves data (sets it as arrays) which was correctly pasted into the sheet, loops through all rows and does math/calendar calculations with cell values using an Add-In("Quantlib")
Public errorMessage as String
Sub newPrice_calculate(lastRow)
Dim Typ() As Variant   '"Type" is a reserved word in VBA, renamed here
Dim Id() As Variant
Dim Price() As Variant
Dim daysTo() As Variant
Dim fx() As Variant
Dim interest() As Variant
Dim ObjCalend As Variant
Dim newPrice As Variant
On Error GoTo Catch
interest = Sheets("SQL_DATA").range("V2:V" & lastRow).Value
Typ = Sheets("SQL_DATA").range("L2:L" & lastRow).Value
Id = Sheets("SQL_DATA").range("M2:M" & lastRow).Value
Price = Sheets("SQL_DATA").range("T2:T" & lastRow).Value
daysTo = Sheets("SQL_DATA").range("K2:K" & lastRow).Value
fx = Sheets("SQL_DATA").range("U2:U" & lastRow).Value
qlError = 1
For i = 2 To lastRow
    If Typ(i, 1) = "LG" Then
        'set something - nothing spectacular like
        interest(i, 1) = 0
        daysTo(i, 1) = 0
    Else
        adjTime = Sqr(daysTo(i, 1) / 365)
        ObjCalend(i, 1) = Application.Run("qlCalendarHolidaysList", _
            "CalObj", ... , .... other input parameters)
        If IsError(ObjCalend(i, 1)) Then GoTo Catch
        'other calendar calcs
        newPrice(i, 1) = Application.Run( 'quantLib calcs)
    End If
Catch:
    Select Case qlError
    Case 1
        errorMessage = errorMessage & " QuantLibXL Cal Error at: " & i & " " & vbNewLine & Err.Description
        ObjCalend(i, 1) = "N/A"
    End Select
Next i
Sheets("SQL_DATA").range("AB2:AB" & lastRow).Value = newPrice
'Sheets("SQL_DATA").range("AA2:AA" & lastRow).Value = daysTo
' erase and set to nothing all arrays and objects
Erase Typ
Erase Id
Erase Price
Erase newPrice 'Erase, not Set ... = Nothing, for a Variant array
Is there a possibility to clean everything in:
Private Sub Workbook_BeforeClose(Cancel As Boolean)
End Sub
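Not a known fix, but a minimal sketch of the kind of explicit cleanup that could go in Workbook_BeforeClose. It assumes the only module-level object is the public conODBC connection from the code above; any other lingering module-level references would be released the same way:

```vb
Private Sub Workbook_BeforeClose(Cancel As Boolean)
    ' Close the public ADO connection if a macro error left it open
    ' (State = 0 means adStateClosed)
    If conODBC.State <> 0 Then conODBC.Close
    Set conODBC = Nothing
    ' Any other module-level objects (recordsets, FileSystemObject,
    ' TextStream handles) left over from a failed run would be
    ' released here the same way.
End Sub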
Thanks in advance
Mark -
I have recently uncovered mysterious cellular usage on three different iPhones. I am a Verizon customer and discovered this by examining the cellular data use logs. What I found are a long series of mysterious data usage logs. I have visited the Genius Bar at my local Apple Store 3 times now to log notes and discuss the issue. It is being escalated.
The characterstics are as follows:
All my family phones have the appropriate iOS and hardware updates (verified by the Genius Bar at my local Apple Store).
This is occurring across three phones, which all happen to be iPhone 5 models. Two are iPhone 5 and the other a new 5s. We do have one iPhone 4 in the family, but the issue (so far as I can tell) is not happening on it.
One iPhone 5 has iOS 7, the other iOS 6. The new 5s has, of course, iOS 7.
Mysterious data use happens even while connected to wifi.
Each mysterious data use log entry is exactly 6 hours apart. For example: 2:18 AM, 8:18 AM, 2:18 PM, 8:18 PM, 2:18 AM ... etc. It cycles AM and PM, same times, for a day to many days until some condition causes it to change (evolve).
The times evolve. One day it could be cycling through at one time series, then it changes to another time sequence and continues until the next condition.
The data usage is anywhere from a few K to many MB. The largest I've seen is over 100 MB.
The logs clearly show these usages are not due to human interaction. It is a program.
If cellular connection is used frequently (by the owner), the pattern is hard to pick out. Luckily, my family member is very good about only using wifi whenever possible, so these mysterious use patterns are easy to pick out.
Verizon allows access to 90 days worth of data logs, so I downloaded and analyzed them. This has been happening for at least 90 days. I have found 298 instances of mysterious use out of 500 total connections to cellular. A total of 3.5 GB of mysterious cellular data has been recorded as used in that 90 days by this phone alone. Only .6 GB of the total cellular data use is legitimate, meaning by the person.
This issue is occurring across three different phones. Two are iPhone 5, and the third is a recently purchased iPhone 5s. The 5s I have not touched beyond the basic startup. I have left it alone on a desk for 3 days, and looking at the logs, the mysterious data use in the same pattern is occurring.
So ... I am speaking to both sides, Verizon and Apple, to get answers. Verizon puts their wall up at data usage. It doesn't matter how it is being used; you simply need to pay for it. Yes, the evidence I have gathered is getting closer to something on Verizon's end.
Before pressing in that direction, I am hoping someone reading this may recognize this issue as a possible iPhone 5 issue ... OR ... you, right now, go look at your data usage logs available through your carrier's web account, and see if you can pick out a pattern of mysterious use. Look especially at the early morning instances when you are most likely sleeping.
I am hoping this is a simple issue that has a quick resolution. I'm looking for the "ooohhh, I see that now ..." But I am starting to think this might be much bigger, but the fact is, most customers rarely or never look at their data usage details, much less discover mysterious use patterns.
The final interesting (maybe frightening) thing about all this is something I discovered while talking to Verizon ... they do not divulge which direction the data is going. This goes for any use, mysterious or legitimate. Is cellular data coming to your phone, or leaving it? Is something pulling data from your phone? We know that it is possible to build malware apps, but the catch with my issue is that it is also happening on a brand new iPhone 5s with nothing downloaded to it.
Thanks for your time.
Thanks for the replies. It took a while not hearing anything, so I thought I was alone. I have done many of the suggestions already. The key here is that it occurs on both phones with apps, and phones still packaged in a box.
A Genius Bar supervisor also checked his Verizon data usage log and found the same 6 hour incremental use. Suprisingly, he did not express much intrigue over that. Maybe he did, but did not show it.
I think the 6 hour incremental usage is the main issue here. I spoke with Verizon (again) and they confirmed that all they do is log exactly when the phone connected to the tower and used data. The time it records is when the usage started. I also found out that the time recorded is GMT.
What is using data, unsolicited, every 6 hours?
Why does it change?
Why does it only happen on the iPhone 5 series and not the 4?
Since no one from Apple seems to be chiming in on this, and I have not received the promised calls from Apple tech support that the Genius Bar staff said I was suppose to receive, it is starting to feel like something is being swept under the rug.
I woke up the other day with another thought ... What application would use such large amounts of data? Well ... music, video, sound and pictures, of course. Well ... what would someone set automatically that is of any use to them? Hmmm ... video, pictures, sound. Is the iPhone 5 susceptible to snooping? Can an app be buried in iOS that automatically turns on video and sound recording and sends it somewhere ... every 6 hours? Chilling. I noted that the smallest data usage is during the night when nothing is going on; then it peaks during the day. The Genius Bar tech and I looked at each other when I drew this sine-wave graph on the log printouts during an appointment ... -
T410s with extremely poor performance and CPU always near 100% usage
Hi,
I've had my T410s for almost a year now and lately it's been starting to get extremely slow, which is odd since it used to be so fast.
Just by opening one program (Outlook, IE or Chrome, just one window), it will start to get extremely slow, and in Task Manager I can clearly see the CPU usage at or very near 100%.
Also another thing I noticed was the performance index on the processor dropped from 6.8 to 3.6!!!
I did a clean Windows 7 Pro 64-bit install, I have all Windows and Lenovo updates installed, and the problem persists.
What is happening and what should I do??
Thank you
Nuno
Thank you again for your answer,
I've done that but hard drive health is also ok as you can see from the screenshot.
At the time of the screenshot I had just finished booting up the laptop and it wasn't hot at all.
I am getting desperate here; I am unable to work with my T410s.
Link to picture
Thank you
Nuno
Moderator note; picture(s) totalling >50K converted to link(s) Forum Rules -
High CPU usage while running a java program
Hi All,
Need some input regarding one issue I am facing.
I have written a simple Java program that lists all the files and directories under one root directory and then copies/replicates them to another location. I am using the java.nio package for copying the files. When I run the program, everything works fine, but the process eats up all the memory and the CPU usage reaches up to 95-100%, so the whole system gets slowed down.
Is there any way I can control the CPU usage? I want this program to run silently without affecting the system or its performance.
Hi,
Below is the code snippets I am using,
For listing down files/directories:
static void Process(File aFile, File aFile2) {
    spc_count++;
    String spcs = "";
    for (int i = 0; i < spc_count; i++)
        spcs += "-";
    if (aFile.isFile()) {
        System.out.println(spcs + "[FILE] " + aFile2.toURI().relativize(aFile.toURI()).getPath());
        String newFile = dest + aFile2.toURI().relativize(aFile.toURI()).getPath();
        File nf = new File(newFile);
        try {
            FileCopy.copyFile(aFile, nf);
        } catch (IOException ex) {
            Logger.getLogger(ContentList.class.getName()).log(Level.SEVERE, null, ex);
        }
    } else if (aFile.isDirectory()) {
        //System.out.println(spcs + "[DIR] " + aFile2.toURI().relativize(aFile.toURI()).getPath());
        String newDir = dest + aFile2.toURI().relativize(aFile.toURI()).getPath();
        File nd = new File(newDir);
        nd.mkdir();
        File[] listOfFiles = aFile.listFiles();
        if (listOfFiles != null) {
            for (int i = 0; i < listOfFiles.length; i++)
                Process(listOfFiles[i], aFile2); // recurse into each entry, not the whole array
        } else {
            System.out.println(spcs + " [ACCESS DENIED]");
        }
    }
    spc_count--;
}
For copying files/directories:
public static void copyFile(File in, File out) throws IOException {
    FileChannel inChannel = new FileInputStream(in).getChannel();
    FileChannel outChannel = new FileOutputStream(out).getChannel();
    try {
        inChannel.transferTo(0, inChannel.size(), outChannel);
    } catch (IOException e) {
        throw e;
    } finally {
        if (inChannel != null) inChannel.close();
        if (outChannel != null) outChannel.close();
    }
}
Please let me know if there is a better approach. But as I already said, currently it's eating up the whole memory.
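Not the poster's code, but a sketch of an alternative using java.nio.file (Java 7+): Files.walkFileTree handles the recursion and access checks, Files.copy does the buffered copy, and an optional sleep after each file keeps the loop from pinning a core. The class name ThrottledCopy and the pauseMillis parameter are illustrative choices, not anything from the original program.

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class ThrottledCopy {

    // Walk src recursively, mirroring directories and copying files into dst.
    // pauseMillis > 0 inserts a short sleep after each file so the copy
    // yields the CPU/disk instead of running flat out.
    public static void copyTree(final Path src, final Path dst, final long pauseMillis)
            throws IOException {
        Files.walkFileTree(src, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs)
                    throws IOException {
                // Recreate the directory structure under dst
                Files.createDirectories(dst.resolve(src.relativize(dir).toString()));
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.copy(file, dst.resolve(src.relativize(file).toString()),
                        StandardCopyOption.REPLACE_EXISTING);
                if (pauseMillis > 0) {
                    try {
                        Thread.sleep(pauseMillis); // brief pause to cap CPU/IO pressure
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return FileVisitResult.TERMINATE;
                    }
                }
                return FileVisitResult.CONTINUE;
            }
        });
    }
}
```

Dropping the hand-rolled recursion and the FileChannel pair also removes the leak risk, since Files.copy opens and closes its own streams.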
Thanks in advance.
Maybe you are looking for
-
Can I scan to PDF using Adobe Reader X?
I want Reader to manage my scans. Can I scan to PDF using Reader X, or is that only a feature in Acrobat. If so, how can I use it as the default manager for my scanner. I am using a Fujitsu FI-6140.
-
Cfselect data binding with statically defined options
I am having some trouble getting statically defined options to remain in my cfselects when I use binding in CF8 I have four cfselects: Continent, Country, State/Province, County The continent select is populated by a list in a database, and each subs
-
Simulating DNG Profile changes in PSD file
Hello everyone, I have a RAW file and applied Adobe's default DNG Profile (via the Camera Calibration panel in Lightroom 2.6) and converted it to a PSD file so I can retouch the photo via Photoshop CS3. After hours of retouching, I now want to apply
-
Hi, I am following the example provided in HDI-TechNet-SystemCenter-winvideo-PowerShellRuleMonitor.wmv to build a custom monitor in 2007R2 Auth Console. The difference is I am using a VB and not PS script. The first deviation is I don't see a Trigge
-
Hello, I am using Cisco Unified Intelligence Center (CUIC), packaged with UCCX 10.5. I find a Unified CCX Live Data "Real Time Wallboard", I cannot run it, even I cannot find any information in Admin Guide and Design. Could anybody help me, how to us