Short Dump TSV_TNEW_PAGE_ALLOC_FAILED while using shared memory objects
Hi Gurus,
We are using shared memory objects to store some data that we will read later. I have implemented the interface IF_SHM_BUILD_INSTANCE in the root class and I am using its method BUILD for automatic area structuring.
Today our developments moved from the development system to the quality system, and while writing the data into shared memory using the methods ATTACH_FOR_WRITE and DETACH_COMMIT in one report, we started getting the runtime error TSV_TNEW_PAGE_ALLOC_FAILED. It is raised when the method DETACH_COMMIT is called to commit the changes to shared memory.
Everything works fine before DETACH_COMMIT. I know this happens because the program ran out of extended memory, but I am not sure why it happens at the DETACH_COMMIT call. If excessive memory were being used in the program, this runtime error should have been raised while calling the ATTACH_FOR_WRITE method or while filling the root class attributes, so I do not understand why it surfaces only at DETACH_COMMIT.
Many Thanks in advance.
Thanks,
Raveesh
Hi Raveesh,
As Naimesh suggested, the system parameter for the shared memory area is probably too small. Compare the system parameters in development and QA, and check which other shared memory areas are in use.
Regarding your question, why it does not fail at ATTACH_FOR_WRITE but then on DETACH_COMMIT:
Probably ATTACH_FOR_WRITE sets an exclusive write lock on the shared memory data and writes to some kind of 'rollback' memory, and DETACH_COMMIT then actually puts the data into the shared memory area and releases the lock. That 'rollback' memory lives in the LUW's work memory, which is much bigger than the usual shared memory size.
This is my assumption - I don't know whether anyone can confirm or refute it.
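That hypothesis can be sketched with a toy model (Python, purely illustrative - the class names and sizes here are invented, not SAP internals): writes go into a private staging buffer that can grow freely, and only the commit copies the data into the fixed-size shared area, so the size check effectively happens at commit time.

```python
# Toy model (NOT SAP code) of the attach/write/commit life cycle described above.
class SharedArea:
    def __init__(self, capacity):
        self.capacity = capacity   # stand-in for the shared-area size limit
        self.committed = b""       # what readers would see after a commit

class WriteHandle:
    def __init__(self, area):
        self.area = area
        self.staging = bytearray() # private "work memory": grows freely

    def write(self, data):
        # Writes land in private memory, so they succeed even if the
        # final data will never fit into the shared area.
        self.staging += data

    def detach_commit(self):
        # Only now is the data published into the shared area, so a
        # size problem surfaces here rather than at attach time.
        if len(self.staging) > self.area.capacity:
            raise MemoryError("shared area too small "
                              "(TSV_TNEW_PAGE_ALLOC_FAILED analogue)")
        self.area.committed = bytes(self.staging)

area = SharedArea(capacity=64)
h = WriteHandle(area)
for _ in range(100):
    h.write(b"0123456789")      # 1000 bytes: every write succeeds
try:
    h.detach_commit()           # fails: 1000 > 64
except MemoryError as e:
    print("commit failed:", e)
```

Under this model, attaching and every write succeed, and the failure only surfaces at commit - matching the behavior described in the question.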
Regards,
Clemens
Similar Messages
-
I created an editable ALV field, and when I change the value in that field and press Tab or Enter, I get a short dump. This does not happen every time.
Can anyone let me know the reason for this?
Sharath.
Hi,
Set handlers for the events DATA_CHANGED and DATA_CHANGED_FINISHED for your grid and handle the changes in those methods accordingly.
*...local class definition
    methods: handle_data_changed
               for event data_changed of cl_gui_alv_grid
               importing er_data_changed,
             handle_data_changed_finished
               for event data_changed_finished of cl_gui_alv_grid.

*...display the grid
  call method grid1->set_table_for_first_display
    exporting
      i_bypassing_buffer   = abap_true
      i_structure_name     = viewname
      is_print             = gs_print
      is_layout            = gs_layout
      it_toolbar_excluding = i_exclude
      i_save               = 'A'
    changing
      it_outtab            = <i_itab>
      it_fieldcatalog      = gt_fieldcat.
  if sy-subrc ne 0.
    exit.
  endif.

*...register the handlers
  set handler handle_double_click
              handle_button_click
              handle_user_command
              handle_data_changed
              handle_data_changed_finished
              handle_onf4
              handle_toolbar
              for grid1.
regards
Isaac Prince -
Question on use of shared memory objects during CIF executions
We have a CIF that runs in background via program RIMODACT, which is invoked from our external job scheduler. (The scheduler kicks off a job - call it CIFJOB - and the first step of this job executes RIMODACT.)
During the execution of RIMODACT, we call a BAdI (an implementation of SMOD_APOCF005.)
In the method of this BAdI, we load some data into a shared memory object each time the BAdI is called. (We create this shared memory object the first time the BAdI is called.)
After program RIMODACT finishes, the second step of CIFJOB calls a wrapper program that calls two APO BAPI's.
Will the shared memory object be available to these BAPIs?
Reason I'm asking is that the BAPIs execute on the APO app server, but the shared memory object was created in a CIF exit called from a program executing on the ECC server (RIMODACT).
Edited by: David Halitsky on Feb 20, 2008 3:56 PM
I know what you're saying, but it doesn't apply in this case (I think.)
The critical point is that we can tie the batch job to one ECC app server. In the first step of this job (the one that executes RIMODACT to do the CIF), we build the itab as an attribute of the "root" shared memory object class.
In the second step of the batch job, we attach to the root class we built in the first step, extract some data from it, and pass these data to a BAPI that we call on the APO server. (This is what I meant by a "true" RFC - the APO BAPI on the APO server is being called from a program on the ECC server.)
So the APO BAPI never needs access to the ECC shared memory object - it gets its data passed in from a program on the ECC server that does have access to the shared memory object.
Restated this way, is the solution correct? -
Short Dump TSV_TNEW_PAGE_ALLOC_FAILED
Hi All,
I am facing the short dump "TSV_TNEW_PAGE_ALLOC_FAILED" problem in my PRD system.
Please find ST22 log and suggest the solution:
Runtime Errors TSV_TNEW_PAGE_ALLOC_FAILED
Date and Time 18.11.2009 12:12:09
Short text
No more storage space available for extending an internal table.
What happened?
You attempted to extend an internal table, but the required space was
not available.
What can you do?
Note which actions and input led to the error.
For further help in handling the problem, contact your SAP administrator
You can use the ABAP dump analysis transaction ST22 to view and manage
termination messages, in particular for long term reference.
Try to find out (e.g. by targeted data selection) whether the
transaction will run with less main memory.
If there is a temporary bottleneck, execute the transaction again.
If the error persists, ask your system administrator to check the
following profile parameters:
o ztta/roll_area (1.000.000 - 15.000.000)
Classic roll area per user and internal mode
usual amount of roll area per user and internal mode
o ztta/roll_extension (10.000.000 - 500.000.000)
Amount of memory per user in extended memory (EM)
o abap/heap_area_total (100.000.000 - 1.500.000.000)
Amount of memory (malloc) for all users of an application
server. If several background processes are running on
one server, temporary bottlenecks may occur.
Of course, the amount of memory (in bytes) must also be
available on the machine (main memory or file system swap).
Caution:
The operating system must be set up so that there is also
enough memory for each process. Usually, the maximum address
space is too small.
Ask your hardware manufacturer or your competence center
about this.
In this case, consult your hardware vendor
abap/heap_area_dia: (10.000.000 - 1.000.000.000)
Restriction of memory allocated to the heap with malloc
for each dialog process.
Parameters for background processes:
abap/heap_area_nondia: (10.000.000 - 1.000.000.000)
Restriction of memory allocated to the heap with malloc
for each background process.
Other memory-relevant parameters are:
em/initial_size_MB: (35-1200)
Extended memory area from which all users of an
application server can satisfy their memory requirement.
Error analysis
The internal table "\FUNCTION-POOL=EL40\DATA=GL_NODETAB[]" could not be further extended. To enable error handling, the table had to be deleted before this log was written. As a result, the table is displayed further down or, if you branch to the ABAP Debugger, with 0 rows.
At the time of the termination, the following data was determined for
the relevant internal table:
Memory location: "Session memory"
Row width: 2160
Number of rows: 1782088
Allocated rows: 1782088
Newly requested rows: 4 (in 1 blocks)
How to correct the error
The amount of storage space (in bytes) filled at termination time was:
Roll area...................... 4419712
Extended memory (EM)........... 2002743520
Assigned memory (HEAP)......... 2000049152
Short area..................... " "
Paging area.................... 32768
Maximum address space.......... " "
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"TSV_TNEW_PAGE_ALLOC_FAILED" " "
"SAPLEL40" or "LEL40U11"
"ISU_ELWEG_HIERARCHY_BUILD"
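A back-of-the-envelope check on the figures quoted in this dump (a sketch; every number below is copied from the log above) shows that the failing internal table alone accounts for most of the roughly 4 GB of extended memory plus heap the session had consumed:

```python
# Figures copied from the ST22 dump above.
row_width = 2160           # bytes per row
rows      = 1_782_088      # allocated rows at termination
table_bytes = row_width * rows

em_used   = 2_002_743_520  # Extended memory (EM) at termination
heap_used = 2_000_049_152  # Assigned memory (HEAP) at termination

print(f"internal table ~ {table_bytes / 1024**3:.2f} GiB")
print(f"EM + heap used ~ {(em_used + heap_used) / 1024**3:.2f} GiB")
```

So reducing the data selection (or raising the memory limits) has to bridge a gap of several gigabytes, not a few megabytes.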
Please help me out to resolve the issue.
Regards,
Nitin Sharma
Hi Chandru,
Thanks for your response.
Please find the details below:
Operating system..... "Windows NT"
Release.............. "5.2"
Hardware type........ "8x AMD64 Level"
Character length.... 16 Bits
Pointer length....... 64 Bits
Work process number.. 0
Shortdump setting.... "full"
Database server... "SVPSAPECP01"
Database type..... "MSSQL"
Database name..... "ECP"
Database user ID.. "ecp"
Char.set.... "C"
SAP kernel....... 700
created (date)... "Nov 18 2008 22:53:36"
create on........ "NT 5.2 3790 Service Pack 1 x86 MS VC++ 14.00"
Database version. "SQL_Server_8.00 "
Patch level. 185
Patch text.. " " -
Enhanced protected mode and global named shared memory object
Good morning.
I've written a BHO that exchanges data with a system service. The service creates named shared memory objects in the Global namespace. Outside the AppContainer of sandboxed IE 11, everything works fine after lowering the objects' integrity level. Inside the sandboxed environment,
OpenFileMappingW seems to return a valid handle, but the calls to MapViewOfFile always give access denied. What am I missing? Thank you.
Service code for security descriptor creation:
if (InitializeSecurityDescriptor(pSA->lpSecurityDescriptor, SECURITY_DESCRIPTOR_REVISION))
{
    // DACL: grant GENERIC_ALL to Everyone (WD) and ALL APPLICATION PACKAGES (AC)
    if (ConvertStringSecurityDescriptorToSecurityDescriptorW(L"D:P(A;;GA;;;WD)(A;;GA;;;AC)", SDDL_REVISION_1, &pSecDesc, NULL) == TRUE)
    {
        BOOL fAclPresent = FALSE;
        BOOL fAclDefaulted = FALSE;
        if (GetSecurityDescriptorDacl(pSecDesc, &fAclPresent, &pDacl, &fAclDefaulted) == TRUE)
            bRetVal = SetSecurityDescriptorDacl(pSA->lpSecurityDescriptor, TRUE, pDacl, FALSE);
    }
    // SACL: low mandatory integrity label so low-IL clients may open the object
    if (bRetVal == TRUE && ConvertStringSecurityDescriptorToSecurityDescriptorW(L"S:(ML;;NW;;;LW)", SDDL_REVISION_1, &pSecDesc, NULL) == TRUE)
    {
        BOOL fAclPresent = FALSE;
        BOOL fAclDefaulted = FALSE;
        if (GetSecurityDescriptorSacl(pSecDesc, &fAclPresent, &pSacl, &fAclDefaulted) == TRUE)
            bRetVal = SetSecurityDescriptorSacl(pSA->lpSecurityDescriptor, TRUE, pSacl, FALSE);
        OutputDebugStringW(L"SACL ok.");
    }
}
return bRetVal;
BHO code
LPMEMORYBUFFER OpenDataChannel(HANDLE *hQueue)
{
    LPMEMORYBUFFER lp = NULL;
    WCHAR data[512] = { 0 };
    for (int a = 0;; a++)
    {
        if (iestatus == FALSE)  // NOT in EPM
            StringCchPrintfW(data, 512, L"Global\\UrlfilterServiceIE.%d", a);
        else                    // in EPM: prefix the AppContainer named-object path
            StringCchPrintfW(data, 512, L"%s\\Global\\UrlfilterServiceIE.%d", appcontainernamedobjectpath, a);
        *hQueue = OpenFileMappingW(FILE_MAP_ALL_ACCESS, TRUE, data);
        if (*hQueue != NULL)
        {
            // The mapping exists; map a view of it
            lp = (LPMEMORYBUFFER)MapViewOfFile(*hQueue, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(MEMORYBUFFER));
            if (lp != NULL)
Ciao Ritchie, thanks for coming here. ;-)
I call only OpenFileMapping and MapViewOfFile in my code, and I get access denied on the first try. As I stated before, this happens when IE11 is running in EPM 32/64-bit mode; outside EPM it works as it should. However, I decided to take a different approach to IPC because, like it or not, the service is up and running as SYSTEM... Still, I'm really interested in two points:
1) Can I use global kernel objects in EPM mode?
2) If the answer to the first is "yes, you can": what's wrong with my code? The security descriptor? Something else?
Thanks all. -
Dumping core while using rwFactory()- create( )
Hi guys!
I'm not sure if this is the right forum, but please help.
Please have a look at the code.
int main(int argc, char *argv[])
{
    std::cout << "Beginning of the program .. " << std::endl;
    RWBag dummy;  // may need a dummy RWBag so the linker pulls in the right code
    std::cout << "This program creates an RWBag using the factory" << std::endl;
    RWBag* b = (RWBag*)(getRWFactory()->create(__RWBAG));
    b->insert(new RWCollectableDate);  // insert today's date
    b->clearAndDestroy();              // cleanup: first delete members,
    delete b;                          // then the bag itself
    return 0;
}
I am using Roguewave version 8 on Solaris 10.
The problem is that the program dumps core while creating the RWCollectable object using (getRWFactory()->create(__RWBAG)).
I'm not sure whether it is the type cast or the create itself.
Thanks,
Rahul.
It may be that an exception is being thrown that you're not catching. Coherence exceptions need to be caught as Exception::View, not Exception::Handle. If there is an exception, its text should help identify whether there was an issue running the updated server side.
Mark
Oracle Coherence -
Getting short dump "TSV_TNEW_PAGE_ALLOC_FAILED" during the load
Hi Experts,
I am getting the short dump "TSV_TNEW_PAGE_ALLOC_FAILED" when loading data from one ODS to two cubes in a 3.1 system. We have only 12,000 records to load, and this load is a delta update. We load about 14,000 records daily from this source, but today we are getting the short dump.
Short Dump : TSV_TNEW_PAGE_ALLOC_FAILED
Description: No storage space available for extending the internal table. We attempted to extend an internal table, but the required space was not available.
Thanks
This is a memory issue whereby an internal table requires more memory than is currently available. If you're executing this during processing of other ETL, then your memory is being consumed by all of the processes, and you would need to change your schedule to balance the load better.
Another possibility is that you have an extremely inefficient SQL statement in a routine that is causing the memory to be overly consumed. Even though the output may be less than average, there is a possibility that it's reading more data in a SELECT statement and therefore requires more memory than normal.
Finally, have your Basis team look at this issue to determine whether there's anything they can do to resolve it. -
Issue while using SUNOPSIS MEMORY ENGINE (High Priority)
Hi Gurus,
While using the SUNOPSIS MEMORY ENGINE to generate a .csv file, with a database table as the source, it throws an error in the Operator like:
ODI-1228: Task SrcSet0 (Loading) fails on the target SUNOPSIS ENGINE connection SUNOPSIS MEMORY ENGINE.
Caused By: java.sql.SQLException: unknown token
(LKM used : LKM Sql to Sql.
IKM used : IKM Sql to File Append.)
Can you please help me with this ASAP, as it has become a show stopper preventing me from proceeding further.
Any Help will be greatly Appreciable.
Many Thanks,
Pavan
Edited by: Pavan. on Jul 11, 2012 10:22 AM
Hi All,
The Issue got resolved successfully.
The solution is
We need to change the work table prefixes E$_, I$_, J$_, ... to E_, I_, J_, ... (i.e., removing the '$' symbol) in the PHYSICAL SCHEMA of the SUNOPSIS MEMORY ENGINE, as per the information given below.
When running interfaces and using a XML or Complex File schema as the staging area, the "Unknown Token" error appears. This error is caused by the updated HSQL version (2.0). This new version of HSQL requires that table names containing a dollar sign ($) are surrounded by quotes. Temporary tables (Loading, Integration, and so forth) that are created by the Knowledge Modules do not meet this requirement on Complex Files and HSQL technologies.
As a workaround, edit the Physical Schema definitions to remove the dollar sign ($) from all the Work Tables Prefixes. Existing scenarios must be regenerated with these new settings.
It worked fine for me.
Thanks ,
Pavan Kumar -
Using Shared Memory in LabVIEW
I'm trying to use shared memory with LabVIEW. Can I use a DLL written in C with LabVIEW to access shared memory?
Lidia,
Check these out (for memory mapping):
http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000006A1D0000&UCATEGORY_0=_318_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=build+cvi+shared+dll&USEARCHCONTEXT_QUESTION_S=0
http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000005BC10000&UCATEGORY_0=_49_%24_6_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=Communicating+Between+Built+LV+App&USEARCHCONTEXT_QUESTION_S=0
But in general you don't need shared memory when you use DLLs. It is used to
share data between different processes. If you need LabVIEW data in a DLL,
try to pass it as a pointer to an array, or as a string pointer.
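The "pointer to an array" advice above is the calling convention LabVIEW's Call Library Function Node uses. As a rough, hedged illustration of that pattern outside LabVIEW (Python's ctypes stands in for the caller, and libc's memcpy stands in for a hypothetical user DLL export such as `void fill_buffer(char *out, int n)`):

```python
import ctypes

# Load the C runtime of the current process as a stand-in for a user DLL
# (POSIX only; on Windows you would use ctypes.CDLL on the .dll file).
libc = ctypes.CDLL(None)
libc.memcpy.restype = None

# The caller allocates the array and hands the DLL a pointer plus a length;
# the DLL fills the caller-owned buffer in place. memcpy plays the role of
# the hypothetical fill_buffer() here.
buf = ctypes.create_string_buffer(16)
libc.memcpy(buf, b"hello from a dll", ctypes.c_size_t(16))
print(buf.raw)
```

The key design point is ownership: the caller (LabVIEW) owns and sizes the buffer, and the DLL only writes into it, which avoids cross-runtime allocation problems.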
Regards,
Wiebe.
"lidia" wrote:
> I'm trying to use shared memory with LabVIEW. Can I use a DLL written
> in C with LabVIEW to access shared memory? -
Hi folks,
This is the first time I am using shared memory, and my question is:
Does the shmat function attach the segment at the same address in different processes? In other words, can I use the same pointer in processes A and B?
Thanks
The issue of alignment is rather tricky.
shmat(2) may well return a misaligned address, so I'd consider using memory-mapped files instead.
mmap(2) returns page-aligned memory (unless you specify MAP_FIXED and some weird first parameter), so you can rely further on the compiler to do the alignment for you... -
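The memory-mapped-file suggestion in the answer above is easy to experiment with. This sketch (Python's mmap module for brevity; POSIX fork assumed) shows a MAP_SHARED mapping carrying data between two processes. On the original question: the mapping may land at different virtual addresses in each process, so exchange offsets into the region rather than raw pointer values.

```python
import mmap
import os

# Anonymous mapping; on POSIX this is MAP_SHARED by default, so pages
# written after fork() by the child are visible to the parent.
buf = mmap.mmap(-1, 4096)

pid = os.fork()
if pid == 0:          # child: write at an agreed-upon offset
    buf[0:5] = b"hello"
    os._exit(0)

os.waitpid(pid, 0)    # parent: read the same offset
print(buf[0:5])
buf.close()
```

The agreed-upon offset (0 here) plays the role that a shared pointer cannot safely play across processes.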
Music app crashes while using shared library
On my iPad 2, while using a shared library, the app crashes every time I search, and most times when I try to play music in the Music app via sharing, only a handful of the artists show up and the album art is all wrong. I tried it with two different iPad 2s, so it must be iTunes on my Mac Mini.
I've tried:
1. Reset ipad (didn't work)
2. Signed out of ipad music shared, shut app down, sign back in. Most times I'll get all my artists and album art back but when I try to search it crashes, then once I reopen the artists are back down to a handful as opposed to hundreds. Search again, crashes immediately.
3. I converted all music in my library to the newest ID tags (didn't work)
4. I deleted both library files, (didn't work)
5. I started over completely, deleted my library files and media, re imported, changed all ID tags to newest version (didn't work)
6. Both ipads and computer and itunes are fully up to date
7. I re-synced both iPads by plugging them into the computer, and deleted all music (didn't work)
This has been going on a long time, even under itunes 10, I'm out of options at this point.
I have 8842 tracks at 66.76 GB.
Unfortunately, I gave up and didn't find a solution. However, it is, as of today, working properly.
Sorry, wish I had a good solution for you. -
I want to use shared memory in LabVIEW. I think I can do it using a DLL written in C.
But can I use some utility included in LabVIEW to do that, without including my own DLL?
Jorge M. wrote:
> Hello,
>
> here's the info. It works.
>
> http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000006A1D0000&UCATEGORY_0=_318_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=build+cvi+shared+dll&USEARCHCONTEXT_QUESTION_S=0
Another one:
http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RPAGEID=135&HOID=5065000000080000005BC10000&UCATEGORY_0=_49_%24_6_&UCATEGORY_S=0&USEARCHCONTEXT_QUESTION_0=Communicating+Between+Built+LV+App&USEARCHCONTEXT_QUESTION_S=0
Rolf Kalbermatter
CIT Engineering Netherlands
a division of Test & Measurement Solutions -
Short dump TSV_TNEW_PAGE_ALLOC_FAILED when import SAPKB70016
Hi all,
I'm trying to import the support package SAPKB70016 in my QAS system and I get an error. The import stops in phase XPRA_EXECUTION, and I saw in transaction SM37 that there is a job running with the name RDDEXECL. This job is cancelled with the dump TSV_TNEW_PAGE_ALLOC_FAILED. I have already changed some parameters and applied some notes, but I can't solve this issue.
Parameter                     Before        After
ztta/roll_area                30000000      100000000
ztta/roll_extension           4000317440    8000000000
abap/heap_area_dia            2000683008    4000683008
abap/heap_area_nondia         2000683008    4000683008
abap/heap_area_total          2000683008    4000683008
em/initial_size_MB            392           1024
abap/shared_objects_size_MB   20            150
es/implementation             map           std
JOB LOG:
Job started
Step 001 started (program RDDEXECL, variant , user ID DDIC)
All DB buffers of application server FQAS were synchronized
ABAP/4 processor: TSV_TNEW_PAGE_ALLOC_FAILED
Job cancelled
ST22 LOG:
Memory location: "Session memory"
Row width: 510
Number of rows: 0
Allocated rows: 21
Newly requested rows: 288 (in 9 blocks)
Last error logged in SAP kernel
Component............ "EM"
Place................ "SAP-Server FQAS_QAS_01 o
Version.............. 37
Error code........... 7
Error text........... "Warning: EM-Memory exhau
Description.......... " "
System call.......... " "
Module............... "emxx.c"
Line................. 1897
The error reported by the operating system is:
Error number..... " "
Error text....... " "
The amount of storage space (in bytes) filled at termination time was:
Roll area...................... 99661936
Extended memory (EM)........... 8287273056
Assigned memory (HEAP)......... 1376776176
Short area..................... " "
Paging area.................... 49152
Maximum address space.......... 18446743890583112895
If the error occurs in a non-modified SAP program, you may be able to
find an interim solution in an SAP Note.
If you have access to SAP Notes, carry out a search with the following
keywords:
"TSV_TNEW_PAGE_ALLOC_FAILED" " "
"CL_ENH_UTILITIES_XSTRING======CP" or "CL_ENH_UTILITIES_XSTRING======CM008"
"GET_DATA"
Now, I don´t know what I can do to solve this problem.
Can you help me?
Thanks
Hi all,
Gagan, I have already changed my parameters according to the above post. I increased these parameters to the maximum allowed, but the dump still persists.
Bhuban
In this server I have 16GB RAM and 600GB HD.
total used free shared buffers cached
Mem: 16414340 4973040 11441300 0 454436 3572592
-/+ buffers/cache: 946012 15468328
Swap: 20479968 0 20479968
Size Used Avail Use% Mounted on
441G 201G 218G 48% /oracle
20G 6.5G 12G 36% /sapmnt
25G 21G 2.7G 89% /usr/sap/trans
25G 8.8G 15G 39% /usr
20G 14G 5.1G 73% /
Anil, I already stopped my application and my database, rebooted the OS too, and afterwards tried again; no success.
What else can I do?
Thanks for all. -
Short Dump error while loading data from R/3 to ODS
Hello,
While trying to load data into the ODS from R/3, I get the following short dump error message. How do I carry out step 1 in the procedure below? Where do I find the Activate function? Any idea?
Thanks,
SD
Diagnosis
Form routine CONVERT_ITAB_RFC is incorrect in transfer program
GP4C0LOLZ6OQ70V8JR365GWNW3K .
System Response
The IDoc processing was terminated and indicated as incorrect. The IDoc
can be reimported at any time.
Procedure
1. Go to the transfer rule maintenance for your InfoSource
ZFIN_TR_FLQITEM_FI and the source system DA_M_10 and regenerate the
transfer program using the function Activate. Remove possible syntax
errors on the basis of your conversion routines.
2. Restart the processing of this IDoc.
3. If the error occurs again search for SAPNet R/3 notes, and create a
problem message if necessary.
Edited by: Sebastian D'Souza on Jan 13, 2009 3:22 PM
Hi,
Go to RSA1, then to the source system tab (on the left side) and double-click the desired source system. On the right side you will have the DataSource tree; search for the DataSource there and activate it. You can also replicate the DataSource again and activate the transfer rules using the program RS_TRANSTRU_ACTIVATE_ALL.
After this operation, when you come back to the source system (R/3), I think the error log line will have disappeared from SM58. Then repeat the load.
You can also try to activate the InfoSource once in RSA1 before repeating the load.
Hope this helps.
Regards,
Debjani.... -
Short Dump "ITAB_DUPLICATE_KEY" while executing DTP
Hi all,
I am getting a short dump when I try to execute the DTP to the cube; the error details follow.
I could not analyze the reason; I need your inputs on this.
Runtime Errors ITAB_DUPLICATE_KEY
Date and Time 02/08/2009 21:33:53
Short dump has not been completely stored (too big)
Short text
A row with the same key already exists.
What happened?
Error in the ABAP Application Program
The current ABAP program "SAPLRSAODS" had to be terminated because it has
come across a statement that unfortunately cannot be executed.
Error analysis
An entry was to be entered into the table
"\FUNCTION=RSAR_ODS_GET\DATA=L_TH_ISOSMAP" (which should have
had a unique table key (UNIQUE KEY)).
However, there already existed a line with an identical key.
The insert operation could have occurred as a result of an INSERT or
MOVE command, or in conjunction with a SELECT ... INTO.
The statement "INSERT INITIAL LINE ..." cannot be used to insert several
initial lines into a table with a unique key.
thanks,
Rk
Hello Rk,
What is the DataSource?
What is your SP level?
It seems the DataSource is not properly activated, or duplicate entries exist in the system tables for the same DataSource.
Have you recently done a system copy?
Try to activate the DataSource again. If it's from R/3, replicate and activate it again; if it's a BW object, try to reactivate it as well.
Thanks
Ajeet