D010TAB table too big

Hello,
I've found on ECC 6.0 (NetWeaver 2004s) that the largest table on the system (currently 10 GB and 44M rows) is table D010TAB.
Is there any specific maintenance that should be done on this table? Is it normal behaviour for this table to be that big? There is no issue with it at the moment, but I suspect that if it keeps growing it will have a negative performance impact on the system.
On my previous systems, I never found this table among the top 10 biggest tables in DB02.

Hi José
Did you find the solution for this problem?
We have the same problem.
Best Regards,
Fábio Karnik Tchobnian

Similar Messages

  • Index is not used (why and what should i do) table too big

    hi guys,
    having a headache now (on 10gR2 on a Linux system).
    I have a large table, and this is how I have been managing it (I am on Standard Edition, so I have no partitioning):
    1) a cron job runs daily to keep track of the lowest and highest ID of the table PER MONTH (inserted into a tracking table)
    2) below is that tracking table:
    CDR     MONTH     01-AUG-11 00:00:00     94118236     97584656
    CDR     MONTH     01-SEP-11 00:00:00     97581362     100573669
    CDR     MONTH     01-OCT-11 00:00:00     100570865     103631203
    CDR     MONTH     01-NOV-11 00:00:00     103629760     106497084
    CDR     MONTH     01-DEC-11 00:00:00     106494085     107306335
    So as you can see, for the month of DEC for example, the lowest CDR ID is 106494085 and the highest is 107306335.
    3) so every time I write SQL on this table, if I want to get data for NOV:
    select * from cdr where ID >= get_cdr_id('CDR','NOV','SMALLEST');
    and it would be
    select * from cdr where ID >= 103629760;
    In the past, the index was used (index range scan), but now the optimiser is doing a full table scan (FTS) and it is taking too much time.
    Therefore I went to force the index:
    select /*+INDEX(C, CDR_PK)*/* from cdr C where cdr_id > 103629760 and cdr_id < 106497084;
    and the COST -> 1678927 (CDR, INDEX_ROWID)
    13158 (CDR_PK, RANGE SCAN)
    -- the results return fast.
    Without the index:
    select * from cdr C where cdr_id > 100570865 and cdr_id < 103631203
    the COST -> 440236 (CDR,FULL)
    -- but the results return very slowly.
    My questions are:
    1) Which cost should I look at?
    The one with the index hint has a much higher COST, but the results return so much FASTER:
    1678927 (with hint) compared with 440236 (FTS)
    2) If using the index is the correct way, why isn't my optimiser using it, and how do I make the optimiser use it?
    Regards,
    Noob

    Iordan Iotzov wrote:
    About the clustering factor– I think there are two possibilities:
    1) The “clustering factor” is generally correct, i.e. it represents the true value of how the data is “scattered” through the table. The clustering factor for the fragment of data you are interested in (where cdr_id > 100570865 and cdr_id < 103631203), however, is different (smaller).
    2) Oracle did not compute the “clustering factor” correctly. This is not an uncommon occurrence, particularly in 10g. This blog entry by Randolf Geist is quite informative - http://oracle-randolf.blogspot.com/2010/01/clusteringfactor-what-if-analysis.html. Comprehensive information on that topic is available in Jonathan Lewis’ book Cost Based Oracle Fundamentals (chapter 5).
    You can make the index more attractive by setting lower “clustering factor” value using DBMS_STATS.SET_INDEX_STATS. If you choose to do that, you should review your whole stats gathering procedure, because the next gathering would most likely reset the “clustering factor” to its old (high) value.
    Iordan Iotzov
    http://iiotzov.wordpress.com/

  • Table too big to print

    This is my first time using Numbers, so I'm having a bit of trouble. I just made a table with 23 countries and their IDPs, and when I made it into a single-column chart the chart only showed a few countries, so I expanded it to make them all show. The only thing is that when I expanded it, it became about 5 pages long if I wanted to print it. So I made the country labels on the x-axis vertical, and that helped a little, but it's still too long. How can I make a graph that will fit on one page to be printed?

    First, size your chart as you need to so everything shows as you like it to.
    At the left of the toolbar is an icon for "view". Click it and select Show Layout. This will show you how your pages will look when printed. In the layout view there is a slider at the bottom for "content size". Slide it to the left to make your chart (and table) smaller. Slide it until your chart fits on one page. Your table may be really small at this point. If it is too small, select the table and drag the selection box to enlarge it. If necessary, put the table on one page and the chart on another.

  • I'm trying to open a 900kb Word doc (240pages) in Pages but get this error message:  Import Warning - A table was too big. Only the first 40 columns and 1,000 rows were imported.

    I'm trying to open a 900 KB Word doc (240 pages) in Pages but get this error message: Import Warning - A table was too big. Only the first 40 columns and 1,000 rows were imported.

    Julian,
    Pages simply won't support a table with that many rows. If you need that many, you must use another application.
    Perhaps the originator could use a tabbed list rather than a table. That's the only way you will be able to contain a list that long in Pages. You can do the conversion yourself if you open the Word document in LibreOffice, Copy the Table, Paste the Table into Numbers, Export the Numbers doc to CSV, and import to Pages. In Pages Find and Replace the Commas with Tabs.
    There are probably other ways, but that's what comes to mind here.
    Jerry

  • File Too Big to Open?

    I have a ~26 MB CSV file, which I had wanted to open in Numbers 09. But when I did, Numbers displayed the Opening dialog box, and then said that the file can't be opened because it was too big. How can I work around this and open the CSV file?

    CSV is a text file; at that size it likely exceeds the 64K row limit for a Numbers table.
    You might be able to open the file in TextEdit, then split it into smaller pieces which you will be able to open.
    Or it may be possible to open the file using either NeoOffice or OpenOffice.org, both of which can be downloaded from their respective websites.
    Make a copy of the file to try these with.
    Regards,
    Barry

  • XML formatted Email Attachment created by CL_IXML is too big to be received

    Dear all,
    I followed the excellent web blog http://wiki.sdn.sap.com/wiki/display/Snippets/FormattedExcelasEmailAttachment
    to pass an internal table (2 MB in size, 18 columns * 6300 rows) to a CL_IXML DOM and send the resulting XML file as a formatted Excel attachment. It works perfectly, but the size of the XML attachment exceeded 8 MB, although I reduced it to 6 MB by simplifying its Cell attribute from
    <Cell ss:StyleID="s62"><Data ss:Type="String">1E2</Data></Cell>
    to                    <Cell><Data ss:Type="String">1E2</Data></Cell>.
    But I cannot do the same on its Data attribute, and it is still too big to be received by the mailbox of my third-party users. Is there any way to reduce it to below 3 MB?
    We need the attachment to be formatted Excel, and cannot ask my third-party receivers to do anything in Excel.
    I am using ECC 6.0, 32-bit non-Unicode, and SAP GUI 7.20.
    Looking forward to your reply,
    Peter Ding

    You can try zipping the file and sending it as the attachment.
    Use the FM SCMS_XSTRING_TO_BINARY to convert the XML to binary. Do the zip using the method COMPRESS_BINARY of the class CL_ABAP_GZIP.
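    A minimal sketch combining those two calls, assuming the rendered XML is already available as an xstring (variable names are illustrative; here the stream is compressed first and then converted into the binary table for the attachment):
    DATA: lv_xml_xstring TYPE xstring,   " XML stream rendered from the CL_IXML document
          lv_gzipped     TYPE xstring,
          lt_binary      TYPE solix_tab, " binary table for the mail attachment
          lv_length      TYPE i.

    " Compress the XML stream; repetitive SpreadsheetML markup usually shrinks a lot
    CALL METHOD cl_abap_gzip=>compress_binary
      EXPORTING
        raw_in   = lv_xml_xstring
      IMPORTING
        gzip_out = lv_gzipped.

    " Convert the compressed stream into the binary table expected by the send API
    CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
      EXPORTING
        buffer        = lv_gzipped
      IMPORTING
        output_length = lv_length
      TABLES
        binary_tab    = lt_binary.
    Note that the receiver then gets a gzip archive and has to unpack it once before opening the XML in Excel.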
    Regards,
    Naimesh Patel

  • ODI-40406: Bytes are too big for array error

    Hello ,
    I am trying to load a flat file from a table. I am working on ODI 11G.
    The following have been set up:
    Staging area different from Source is checked.
    IKM used is : IKM SQL to File Append
    Truncate is True
    No LKM is used.
    I am getting the following error:
    java.lang.IllegalArgumentException: ODI-40406: Bytes are too big for array
    create header (RECORDTYPE,
    ASSIGN_NUM,
    USR_ID,
    START_TIME,
    JOBCODEID,
    AISLE_AREA_ID,
    PLANE_DATE,
    CLIENT_ID,
    CSTNUM,
    WH_ID)
    /*$$SNPS_START_KEYSNP$CRDWG_TABLESNP$CRTABLE_NAME=UNITIME TO RPSNP$CRLOAD_FILE=C:\Program Files\ODI_BI_APPS/UNITIME TO RP.txtSNP$CRFILE_FORMAT=FSNP$CRFILE_SEP_FIELD=0x0009SNP$CRFILE_SEP_LINE=0x000D0x000ASNP$CRFILE_FIRST_ROW=0SNP$CRFILE_ENC_FIELD=SNP$CRFILE_DEC_SEP=SNP$CRSNP$CRDWG_COLSNP

    There is a possibility of a datatype mismatch, which can cause the problem.
    Say in the ODI model you have defined a 'Date' field to be stored as 'String', but at the time of mapping in the 'Interface' no conversion happens (from date to string) for this particular object. This causes the problem.
    The original query remains valid on the DB side (it fires for the date type) but fails while integrating (expecting a String, which may be longer than defined in the model because of your NLS setting in the DB). Therefore the best way would be to apply a conversion for that particular field (in this case, use TO_CHAR(Date type, 'Desired Date Format')).

  • ODI-40406: Bytes are too big for array

    I am getting the following error: java.lang.IllegalArgumentException: ODI-40406: Bytes are too big for array
    I am trying to do a file-to-file mapping and all of my columns match in size. Does anyone have any idea what this problem could be? I am getting the error when it tries to perform an integration step while creating the target table. I am assuming something is wrong with one of my datastores.
    Edited by: 897642 on Nov 16, 2011 5:19 PM

    There is a possibility of a datatype mismatch, which can cause the problem.
    Say in the ODI model you have defined a 'Date' field to be stored as 'String', but at the time of mapping in the 'Interface' no conversion happens (from date to string) for this particular object. This causes the problem.
    The original query remains valid on the DB side (it fires for the date type) but fails while integrating. Therefore the best way would be to apply a conversion for that particular field (in this case, use TO_CHAR(Date type, 'Desired Date Format')).

  • SQL server error log size is too big to handle

    I am working with a large database on Windows SQL Server 2008 R2 that has to run continuously 24x7, so it is not possible to restart the server from time to time. It is a kind of monitoring system for big machines. Because of this, the SQL Server error logs are growing too big, sometimes even up to 60-70 GB on a limited-size hard drive, and I can't keep deleting them manually.
    Can someone please suggest a way to stop the creation of such error logs, or to recycle them after some time? Most of the errors are of this kind --
    Setting database option RECOVERY to simple for database db_name
    P.S. I have read about limiting the number of error logs to 6, etc., but that didn't help. It would be best if you could suggest some method to disable these logs.

    Hi Mohit11,
    According to your description, your SQL Server error logs are growing too big to handle on a limited-size hard drive, and you want to know how to stop the generation of such error logs, or recycle them after some time, automatically and without restarting SQL Server, right?
    As others mentioned above, we may not be able to disable SQL Server error log generation. However, we can recycle the error logs automatically by running sp_cycle_errorlog on a fixed schedule (e.g. every two weeks) using a SQL Agent job, so that the error logs are recycled without restarting SQL Server.
    It is also important to keep the error log files readable. So we can increase the number of error logs a little and run sp_cycle_errorlog more frequently (e.g. daily); then each file will be smaller and more readable, and the log files are still recycled automatically.
    In addition, to avoid the total size of all the log files unexpectedly growing too large (it can happen), we can run the following script in a SQL Agent job to automatically delete the old log files when their total size is larger than some value we want to keep (e.g. 30 GB):
    --create a temp table to gather the information of the error log files
    CREATE TABLE #ErrorLog
    (
    Archive INT,
    Dt DATETIME,
    FileSize INT
    )
    GO
    INSERT INTO #ErrorLog
    EXEC xp_enumerrorlogs
    GO
    --delete all the old log files if the size of all the log files is larger than 30GB
    DECLARE @i int = 1;
    DECLARE @Log_number int;
    DECLARE @Log_Max_Size int = 30*1024; --here is the max size (M) of all the error log files we want to keep, change the value according to your requirement
    DECLARE @SQLSTR VARCHAR(1000);
    SET @Log_number = (SELECT COUNT(*) FROM #ErrorLog);
    IF (SELECT SUM(CAST(FileSize AS BIGINT))/1024/1024 FROM #ErrorLog) >= @Log_Max_Size --total size of all error log files, in MB
    BEGIN
    WHILE @i <= @Log_number
    BEGIN
    SET @SQLSTR = 'DEL C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.' + CONVERT(VARCHAR,@i);
    EXEC xp_cmdshell @SQLSTR;
    SET @i =@i + 1;
    END
    END
    DROP TABLE #ErrorLog
    For more information about How to manage the SQL Server error log, please refer to the following article:
    http://support.microsoft.com/kb/2199578
    If you have any question, please feel free to let me know.
    Regards,
    Jerry Li

  • Numbers window too big

    For some reason, every time I open a new Numbers document the spreadsheet is far too big for my screen and I need to reduce its size.
    Do you know why it is doing this or how I can stop it?

    Unfortunately, you still aren't being clear about whether it is a view issue or a table size issue.
    As the other replies say, if the table is too big for your needs, make it smaller and go to File > Save as Template. Then go to Preferences and select the new template as the default.
    If the table has the right number of columns and rows but your screen isn't big enough, use Yvan's solution.

  • DBIF_RSQL_INVALID_RSQL statement too big

    Hi All,
    We are on SAP R/3 4.6C (kernel 46D 64-bit, patch 2113, with the latest DBSL library; AIX 5.2, DB2 7.1 on z/OS 1.4) and for a BW extraction we have the following dump:
    "ABAP runtime error DBIF_RSQL_INVALID_RSQL, RSQL error 13 occurred.
    The following message also appears in the developer trace:
    B *** ERROR => dbtran ERROR (set_input_da_spec): statement too big
    B marker count = 1195 > max. marker count = 726."
    According to OSS note 655018 the limit for the marker count is 2000, but in the work process trace we read 726.
    Many thanks for your help
    Bob

    Hi Bernhard,
    thank you for the reply. This is the dump:
    ============================================================
    ABAP runtime errors    DBIF_RSQL_INVALID_RSQL
           Occurred on     16.01.2008 at 10:29:04
    RSQL error 13 when accessing table "EKKO ".
    What happened?
    The current ABAP/4 program "SAPDBERM " had to be terminated because
    one of the statements could not be executed.
    This is probably due to an error in the ABAP/4 program.
    What can you do?
    Note the actions and input that caused the error.
    Inform your SAP system administrator.
    You can print out this message by choosing "Print". Transaction ST22
    allows you to display and manage termination messages, including keeping
    them beyond their normal deletion date.
    Error analysis
    The SQL statement generated from SAP Open SQL violates a
    restriction imposed by the database system used in R/3.
    For details, refer to either the system log or the developer trace.
    Possible reasons for error:
    o Maximum size of an SQL statement exceeded.
    o The statement contains too many input variables.
    o The input data requires more space than is available.
    o ...
    You can usually find details in the system log (SM21) and in
    the developer trace of the relevant work process (ST11).
    In the event of an error, the developer trace often gives the
    current restrictions.
    How to correct the error
    The SAP Open SQL statement concerned must be divided into several
    smaller units.
    If the problem occurred because an excessively large table was used
    in an IN itab construct, you can use FOR ALL ENTRIES instead.
    When you use this addition, the statement is split into smaller units
    according to the restrictions of the database system used.
    System environment
    SAP Release.............. "46C"
    Application server....... "R3PRD"
    Network address.......... "172.24.10.50"
    Operating system......... "AIX"
    Release.................. "5.2"
    Hardware type............ "00C7A3EC4C00"
    Database server.......... "r3prddb_gb"
    Database type............ "DB2"
    Database name............ "PRD"
    Database owner........... "SAPR3"
    Character set............ "en_US.ISO8859-1"
    SAP kernel............... "46D"
    Created on............... "Aug 25 2005 20:51:50"
    Created in............... "AIX 1 5 00447C4A4C00"
    Database version......... " "
    Patch level.............. "2113"
    Patch text............... " "
    Supported environment....
    Database................. "DB2 for OS/390 6.1"
    SAP database version..... "46D"
    Operating system......... "AIX 1 4, AIX 2 4, AIX 3 4, AIX 1 5, AIX 2 5, AIX 3
    5, , System build information:, -
    , LCHN :
    775484"
    User, transaction...
    Client.............. 300
    User................ "ALEREMOTE"
    Language key........ "E"
    Transaction......... " "
    Program............. "SAPDBERM "
    Screen.............. "SAPMSSY0 1000"
    Screen line......... 6
    Information on where termination occurred
    The termination occurred in the ABAP/4 program "SAPDBERM " in
    "PUT_EKKO".
    The main program was "AQZZSYSTBWGENER0SY000000000361 ".
    The termination occurred in line 333
    of the source code of program "SAPDBERM " (when calling the editor 3330).
    The program "SAPDBERM " was started as a background job.
    Source code extract
    003030   ENDFORM.
    003040
    003050   ----
    003060   FORM PUT_EKKO.
    003070
    003080     CONSTANTS: LC_PACKAGE_SIZE LIKE SY-TFILL VALUE 10.
    003090     DATA: BEGIN OF LT_SEKKO OCCURS 0,
    003100             EBELN LIKE EKKO-EBELN,
    003110           END OF LT_SEKKO.
    003120     DATA: BEGIN OF LT_TEKKO OCCURS 0,
    003130             EBELN LIKE EKKO-EBELN,
    003140           END OF LT_TEKKO.
    003150     DATA: L_INDEX_FROM LIKE SY-TABIX,
    003160           L_INDEX_TO   LIKE SY-TABIX,
    003170           L_EKPO_TABIX LIKE SY-TABIX.
    003180     PERFORM KSCHL_SETZEN.
    003190     ONEST = P_ONEST.
    003200     TEST  = TESTLAUF.
    003210     DB_DEL  = ' '.    "VETVG und Komponenten bleiben noch stehen
    003220     DETPROT = P_PROT.
    003230     ANDAT = ER_ANDAT.
    003240     PERFORM AKTIVITAET_SETZEN(SAPFM06D) USING '06'.
    003250     SELECT EBELN    FROM  EKKO INTO TABLE LT_SEKKO
    003260       WHERE EBELN  IN ER_EBELN
    003270         AND EKORG  IN ER_EKORG
    003280         AND BSTYP  IN ER_BSTYP
    003290         AND BEDAT  IN ER_BEDAT
    003300         AND BSART  IN ER_BSART
    003310         AND EKGRP  IN ER_EKGRP
    003320   *   AND MEMORY NE 'X'
    >         ORDER BY EBELN.
    003340     L_INDEX_FROM = 1.
    003350
    003360     DO.
    003370       REFRESH LT_TEKKO.
    003380       L_INDEX_TO = L_INDEX_FROM + LC_PACKAGE_SIZE - 1.
    003390       LOOP AT LT_SEKKO FROM L_INDEX_FROM
    003400                        TO   L_INDEX_TO.
    003410         APPEND LT_SEKKO TO LT_TEKKO.
    003420       ENDLOOP.
    003430       IF SY-SUBRC NE 0.
    003440         EXIT.
    003450       ENDIF.
    003460       SELECT        * FROM  EKKO INTO TABLE XEKKO
    003470         FOR ALL ENTRIES IN LT_TEKKO
    003480              WHERE  EBELN  = LT_TEKKO-EBELN
    003490         ORDER BY PRIMARY KEY.
    003500       SELECT        * FROM  EKPO INTO TABLE XEKPO
    003510         FOR ALL ENTRIES IN LT_TEKKO
    003520              WHERE  EBELN  = LT_TEKKO-EBELN
    Contents of system fields
    SY field contents..................... SY field contents.....................
    SY-SUBRC 0                             SY-INDEX 0
    SY-TABIX 1                             SY-DBCNT 152
    SY-FDPOS 1                             SY-LSIND 0
    SY-PAGNO 0                             SY-LINNO 1
    SY-COLNO 1
    ============================================================
    We have also installed the latest kernel and DBSL 2342.
    Thank you and regards
    Bob

  • Field Length too big to display

    Hi
    We have a scenario where the wrong unit of measure was used, resulting in numbers that were too big to display on deliveries and reports, e.g. 555,000,000,000,000.00. We are now changing the UOM for the affected PNs, but wondered if anybody has any other solutions for how SAP can cope with displaying fields that are too large for their defined length on all the old deliveries created previously. Currently these hit runtime display errors. We have already extended the field length once, but need a more robust solution in case there are any more materials like this.
    Thanks

    Extending the field length will not make the error go away. Instead, it makes it easier for users to enter wrong units and conversions; since SAP will dump less often, you will simply encounter the errors much later than now.
    Analyse your materials and correct the wrong entries; the conversions to alternative units are stored in table MARM.
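    A minimal ABAP sketch of that kind of check, assuming you simply want to list conversion records whose factors look implausible (the threshold value is purely illustrative):
    DATA: lt_marm TYPE STANDARD TABLE OF marm.

    " MEINH = alternative unit of measure; UMREZ / UMREN = conversion
    " numerator / denominator relative to the base unit of measure.
    SELECT * FROM marm
      INTO TABLE lt_marm
      WHERE umrez > 1000000.   " flag suspiciously large conversion factors

    " Review the hits and correct the affected materials (MM02, Additional Data -> Units of Measure).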

  • Cdhdr too big

    Hi
    A program is making use of the table CDHDR,
    with a WHERE condition on OBJECTID and UDATE, using a FOR ALL ENTRIES condition.
    But since this table contains 28 million records, a timeout occurs, because the search is too big and we are also not making use of the primary key criteria. Please advise how this can be solved to prevent the timeout.

    Hi,
    2 solutions.
    1. Try to use all the primary key fields in the WHERE clause, even though it will still be a bit slow :-) (a sketch follows below).
    2. If you don't have any front-end interactions like GUI_DOWNLOAD, grid display, etc., then instead of running it in the foreground, run it as a batch job. It creates a spool, and you can take the output from the spool later.
    I think the second way is better.
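    A minimal sketch of option 1, assuming the program drives the selection from an internal table of object IDs (the object class value and the variable names are illustrative):
    DATA: BEGIN OF ls_object,
            objectid TYPE cdhdr-objectid,
          END OF ls_object,
          lt_objects LIKE STANDARD TABLE OF ls_object,
          lt_cdhdr   TYPE STANDARD TABLE OF cdhdr,
          lr_udate   TYPE RANGE OF cdhdr-udate.

    " ... fill lt_objects with the object IDs and lr_udate with the date restriction ...

    IF lt_objects IS NOT INITIAL.  " FOR ALL ENTRIES on an empty table would read the whole table
      SELECT * FROM cdhdr
        INTO TABLE lt_cdhdr
        FOR ALL ENTRIES IN lt_objects
        WHERE objectclas = 'EINKBELEG'          " key field: change-document object class (example value)
          AND objectid   = lt_objects-objectid  " key field
          AND udate      IN lr_udate.           " date restriction; the key fields let the index be used
    ENDIF.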
    Thanks,
    Vinod.

  • Client Export Log too big

    Hi all,
    in the client export log I found many of these strings for different tables:
    4 ETW000 AGR_HIER.X_POS: NUMC value changed from '     ' to '00000'
    and the log size is about 2.5 GB.
    What can I do to solve the problem?
    Thanks in advance
    M.

    The size of the client export is about 12.5 GB.
    The problem is that the export log file is too big, because it contains many strings like this for different tables:
    4 ETW000 CEFORMS.FLEVEL: NUMC value changed from ' ' to '0'
    4 ETW000 CEFORMS.NUMBR: NUMC value changed from '   ' to '000'
    4 ETW000 CEFORMS.FLEVEL: NUMC value changed from ' ' to '0'
    4 ETW000 CEFORMS.NUMBR: NUMC value changed from '   ' to '000'
    4 ETW000 CEFORMS.FLEVEL: NUMC value changed from ' ' to '0'
    4 ETW000 CEFORMS.NUMBR: NUMC value changed from '   ' to '000'
    4 ETW000 CEFORMS.FLEVEL: NUMC value changed from ' ' to '0'
    4 ETW000 CEFORMS.NUMBR: NUMC value changed from '   ' to '000'
    4 ETW000 CEFORMS.FLEVEL: NUMC value changed from ' ' to '0'
    4 ETW000 CEPRINT.COLOR: NUMC value changed from '  ' to '00'
    4 ETW000 CEPRINT.FROW: NUMC value changed from '    ' to '0000'
    4 ETW000 CEPRINT.SFTCT: NUMC value changed from '  ' to '00'
    4 ETW000 CEPRINT.COLOR: NUMC value changed from '  ' to '00'
    4 ETW000 CEPRINT.FROW: NUMC value changed from '    ' to '0000'
    4 ETW000 CEPRINT.SFTCT: NUMC value changed from '  ' to '00'
    4 ETW000 CEPRINT.COLOR: NUMC value changed from '  ' to '00'
    4 ETW000 CEPRINT.FROW: NUMC value changed from '    ' to '0000'
    4 ETW000 CEPRINT.SFTCT: NUMC value changed from '  ' to '00'
    4 ETW000 CEPRINT.COLOR: NUMC value changed from '  ' to '00'
    and if I don't delete the log file in /usr/sap/trans/log, this file system will fill up (this is the real problem).
    I don't understand why I find so many entries (NUMC changed from ' ' to '00') for the same tables in the log file.
    Thanks
    M.

  • Offset declaration too big for structure (unicode checks)

    Has anyone worked with this user exit to Derive values for co/pa?
    Enhancement COPA0001
    Comp ZXKKEU11
    I set up WA_CE1AWG1 as data: WA_CE1AWG1 like CE1AWG1.
    and tried to copy I_COPA_ITEM to this variable
    WA_CE1AWG1 = I_COPA_ITEM.
    I get a runtime error Offset declaration too big for structure.
    Will I have to change program SAPLXKKE to remove the Unicode checks? Is there any other way to read the fields from the input structure I_COPA_ITEM?

    Hi,
    1. Are you sure whether I_COPA_ITEM is a structure or a table?
    2. If it is a structure, make sure that WA_CE1AWG1 and I_COPA_ITEM have the same structure and types (see the sketch below).
    I think I_COPA_ITEM may be a table type --> to find out, open it in the debugger and then try to resolve it.
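    If both really are flat structures that are simply not type-compatible, a field-by-field copy is one way to avoid the offset/length conflict that the Unicode checks raise on a direct assignment. A minimal sketch, assuming I_COPA_ITEM is a flat structure whose component names overlap with CE1AWG1:
    DATA: wa_ce1awg1 TYPE ce1awg1.

    " Copy identically named components one by one instead of reinterpreting the
    " whole structure, which is what 'wa_ce1awg1 = i_copa_item' would attempt.
    MOVE-CORRESPONDING i_copa_item TO wa_ce1awg1.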
    Please close this thread when your problem is solved.
    Reward if helpful.
    Regards
    Naresh Reddy K
