Wednesday, December 31, 2014

Getting an error when trying to do a mysqldump backup from the console


mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (10061) when trying to connect


When I try to do a mysqldump backup from the console I get the above error. I have properly configured my MySQL Enterprise Workbench and SQLyog Ultimate tools, and I am able to run queries and take backups through them.


Could anyone suggest what mistake I have made in the system profile configuration of Workbench? I have checked my connection parameters and they are OK.


Do I need to configure anything anywhere?





Store Repeated Data


I have a table (Key tinyint, Value bigint). There are a billion rows for each distinct Key value and only about 100 distinct Key values.


Is there a way to avoid storing Key in every row without breaking this up into multiple tables?





Is that create table statement correct?


Although I have not been part of this community for a long time, I already feel like you are my family, since that's what a family does: we help each other.


I am thankful to everyone who helped me!



CREATE TABLE IF NOT EXISTS `Happy New Year 2015` (
`Hope` int(11) unsigned NOT NULL AUTO_INCREMENT,
`Your` varchar(30) NOT NULL,
`Wishes` real(11) NOT NULL,
`And` varchar(30) NOT NULL,
`Dreams` real(11) NOT NULL,
`Come` int(11) NOT NULL,
`True` boolean NOT NULL,
PRIMARY KEY (`Wishes And Dreams`)
) ENGINE=DByear DEFAULT CHARSET=happy7112 AUTO_INCREMENT=INFINITE ;


Happy New Year.





Foreign key set to cascading but fails to update - why?


As mentioned elsewhere, I am rather new to SQL and databases. Getting my hands dirty at the moment and trying all the different possibilities.


I have a server running MySQL and set up the following table for testing:



Field        Type          Collation        Attributes  Null  Default  Extra
id           mediumint(8)                   UNSIGNED    No    None     AUTO_INCREMENT
domain       varchar(255)  utf8_unicode_ci              No    None
status       enum('...')   utf8_unicode_ci              No    None
replaced_by  mediumint(8)                   UNSIGNED    Yes   NULL

The idea was to have a list of domains that are valid for emails, such as "gmail.com" and "hotmail.com". Now if some domain is recognized as a duplicate of another (such as "googlemail.com" to "gmail.com"), the status column indicates this and the "replaced_by" column gives the ID of the main domain to be used.


Also, if a domain is recognized as invalid (such as "gymail.com" instead of "gmail.com") the database could store this information and, through status and replaced_by a script could suggest the correct domain.


While id is the primary key of this table, I wanted to test a foreign key on the replaced_by column with ON DELETE RESTRICT and ON UPDATE CASCADE, so that the replaced_by column stays in sync with the main id.
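
For reference, a minimal sketch of the DDL I am describing (assuming InnoDB; the ENUM values are placeholders, since the real ones are not shown above):

CREATE TABLE domains (
    id          MEDIUMINT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
    domain      VARCHAR(255) NOT NULL,
    status      ENUM('valid', 'duplicate', 'invalid') NOT NULL,  -- placeholder values
    replaced_by MEDIUMINT(8) UNSIGNED NULL DEFAULT NULL,
    PRIMARY KEY (id),
    -- self-referencing foreign key meant to keep replaced_by in sync with id
    CONSTRAINT domains_ibfk_1 FOREIGN KEY (replaced_by)
        REFERENCES domains (id)
        ON DELETE RESTRICT
        ON UPDATE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;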


But somehow when I try to update data, I get the following error. Why?



#1451 - Cannot delete or update a parent row: a foreign key constraint fails (`test`.`domains`, CONSTRAINT `domains_ibfk_1` FOREIGN KEY (`replaced_by`) REFERENCES `domains` (`id`) ON UPDATE CASCADE)




Why does a space in a name in syslogins get treated like a wildcard?



USE [aDatabaseName]
GO

IF EXISTS (select * from sys.sysusers su join sys.syslogins sl
on sl.sid = su.sid where sl.name = N'XYZ\SQLBackup')

print ' dropping user from aDatabaseName'
DROP USER [XYZ\SQLBackup]

GO


The problem here is that the name in syslogins is actually XYZ\SQL Backup (note the space). Why would the select statement for XYZ\SQLBackup return a row for XYZ\SQL Backup?


When I run it, it tells me that I can't drop user [XYZ\SQLBackup] because it doesn't exist. Which is certainly true.





MySQL multi-source replication to a single slave from Debian repos?


I've been experimenting with MySQL 5.7's feature to replicate multiple sources to a single slave, but I have legacy DBs on the source servers where I'm not sure whether upgrading to 5.7 will break things, and I also want to use standard packages from the Debian repositories for more standardized support. Does anyone have experience doing something like this, and/or is there a better way to approach it?





There was an unknown error applying the filestream settings. check the parameters are valid. (0x80070002)


I am getting the following error while trying to configure FILESTREAM in SQL Server 2012.


There was an unknown error applying the filestream settings. check the parameters are valid. (0x80070002)


OS: Windows 8.1, SQL Server 2012 x64.





Unable to connect to or create SQL Server database using asp.net


In my asp.net project I have a SQL Server database with some tables. Unfortunately my PC was not properly shut down, and after that I am unable to connect to my database from Visual Studio (I am using version 2012). This is the error message:



The attempt to attach to the database failed with the following information: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 50 - Local Database Runtime error occurred. Cannot create an automatic instance. See the Windows Application event log for error details.)



The log file in Microsoft SQL Server Local DB11\Instances\v11.0 suggested that the master.mdf database is corrupted. After some research I tried to restore it using rebuildm.exe, but my attempts were not successful. This tool was not originally present on my PC, so I downloaded all the .exe, .dll, and .rll files; I am not sure whether it even works properly.


I also tried a Windows system restore, but it could not be performed (some issues with my antivirus program).


I tried reinstalling all Microsoft SQL Server features available in Windows Programs and Features, and VS too, but the error is still present. My project deadline is getting closer and I have lost a whole day trying to fix this problem. I have little knowledge of VS or SQL Server. Any suggestions will be highly appreciated!





Avoiding multiple `or` statements


I have the following Oracle SQL and it works and all, but it's quite ugly with all of the ors. Is there a more concise way of doing this?



SELECT * FROM foobar t1 WHERE
    (t1.SUBJECT = 'STAT'    AND t1.TERM = '111') OR
    (t1.SUBJECT = 'STAT'    AND t1.TERM = '222') OR
    (t1.SUBJECT = 'ENGLISH' AND t1.TERM = '555') OR
    (t1.SUBJECT = 'COMM'    AND t1.TERM = '444') OR
    (t1.SUBJECT = 'COMM'    AND t1.TERM = '333') OR
    (t1.SUBJECT = 'STAT'    AND t1.TERM = '666')
...
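
One shape I have been experimenting with, as a sketch (Oracle supports comparing column tuples with IN, if I am using it correctly):

SELECT *
FROM foobar t1
WHERE (t1.SUBJECT, t1.TERM) IN (
    ('STAT',    '111'),
    ('STAT',    '222'),
    ('ENGLISH', '555'),
    ('COMM',    '444'),
    ('COMM',    '333'),
    ('STAT',    '666')
);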




Duplicate table structure with different data


I am looking for a way to resolve a design issue with a Microsoft Access project that is being converted to use SQL Server 2014 as a backend.


We have several Access backend files, two of them have the same table structure, but different data. In the "Quotes" MDB backend file, we have tables containing sales quotes. In the "Jobs" MDB backend file, we have tables containing jobs (quotes that have been sold), but also the same sales quotes tables are duplicated. When a quote is sold, it's copied from one backend to the other. Which means the data exists in two places and is not kept in sync.


When I originally started upsizing tables from the MDB backends, I didn't think about this issue, but now it's becoming a problem that needs to be resolved. We don't know how the structure should be fixed. It's my understanding that the combined set of all the tables in the various MDB backends for this software should be in one database in SQL Server. The only reason for splitting them up as they are now is to overcome limitations of the Access MDB format.


To complicate matters further, the tables have exactly the same names. The Access front-end application switches between backends on-the-fly, when the user changes between quoting or working with sold orders.





Index rebuild log growth


How much log will an index rebuild generate? I remembered reading that rebuilding an index should generate roughly the same amount of log as the table size, but my tests are showing otherwise. We need this estimate since we are trying to rebuild indexes on an Azure database and it has a limitation of max 2 GB.


My database is in full recovery model.


Table Size:


[screenshot]


log size:


[screenshot]


From the screenshots you can see that the log generated by the online index rebuild operation was far less than the table size. Can someone correct me if I am missing anything?





Determine whether there is any blocking while deleting large number of records


I would like to know whether there is any way to determine why it is taking so much time, say 3 minutes, to delete 50K records from a database hosted in SQL Azure. I have a clustered index on the primary key and a non-clustered index on the ID that is used in the WHERE condition of the DELETE query.



DECLARE @RowsDeleted INTEGER;
SET @RowsDeleted = 1;

WHILE (@RowsDeleted > 0)
BEGIN
    DELETE TOP (100) FROM table WHERE ScenarioID = @ID;
    SET @RowsDeleted = @@ROWCOUNT;
END


Is there any way to know whether there are any locks on this table?
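
As a sketch of a check I could run while the delete loop is executing (assuming the usual dynamic management views, which are exposed in SQL Azure):

-- sessions that are currently blocked, and who is blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       r.command
FROM sys.dm_exec_requests r
WHERE r.blocking_session_id <> 0;

-- locks currently held or requested in this database
SELECT request_session_id,
       resource_type,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();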


indexes:



ALTER TABLE [dbo].[ActCost] ADD CONSTRAINT [PK_ActCost] PRIMARY KEY CLUSTERED
(
[ActCostID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO



Alter NONCLUSTERED INDEX [IX_ActCost_ScenarioID] ON [dbo].[ActCost] ([ScenarioID] ASC)
INCLUDE ( <list_all_other_columns_returned> );
GO


CREATE TABLE [dbo].[ActCost](
[ActCostID] [int] IDENTITY(1,1) NOT NULL,
[ActID] [int] NOT NULL,
[ActCostTypeID] [int] NOT NULL,
[Description] [nvarchar](200) NOT NULL,
[Cost] [float] NOT NULL,
[CostPerProductUnit] [float] NOT NULL,
[CostPerEndProductUnit] [float] NOT NULL,
[OtherValue] [float] NULL,
[OtherID] [int] NULL,
[Comment1] [nvarchar](200) NULL,
[Comment2] [nvarchar](200) NULL,
[OPerProductUnit] [float] NULL,
[OPerHour] [float] NULL,
[OCostPerUnit] [float] NULL,
[OCostPerHour] [float] NULL,
[PerfEnh_ProcessID] [int] NOT NULL,
[PerfEnh_MillID] [int] NOT NULL,
[ScenarioID] [int] NOT NULL,
CONSTRAINT [PK_ActCost] PRIMARY KEY CLUSTERED
(
[ActCostID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)

GO


Azure space: 2 GB available, 500 MB used.


I have another database with the same data but more available size, which works fine; I mean it takes just 20-30 seconds to do the same job. Is that going to be a factor?


[screenshot]


Blocking inspection: [screenshot]





View last several innodb deadlocks


I see that I can view the latest deadlock in mysql/innodb, but is there a way to view past deadlocks? We have two deadlock issues, one being important and the other not. The less important deadlock happens several times a day so it becomes the "latest" deadlock.
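
One workaround I have read about, as a sketch (it assumes MySQL 5.6's innodb_print_all_deadlocks variable, or a build that backports it): log every deadlock to the error log so none of them get overwritten:

-- record information about all deadlocks in the MySQL error log
SET GLOBAL innodb_print_all_deadlocks = ON;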





Separate login for user / administrators using mongodb


I have 2 separate groups of users on my website. One is user and the other one is admin. Currently I put them in 2 different collections, user and admin, and they log in respectively. My questions are:




  1. Mongodb generates _id automatically for both the user and the admin collection. Since they're in 2 different collections, chances are that one day you create one user and one admin with the same _id, right? So it seems the only way is to check all documents in both collections and make sure your newly created user (or admin) does not have a repeated _id, right? Is there something I can do natively in mongodb to make this happen without checking?




  2. I can also put all users and admins in one collection. This way I don't have the problem, but I am not sure if I should do this.




P.S.: Because I have a table keeping track of user/admin balances, I certainly don't want the _id to repeat. P.S.2: The db is taking longer and longer to query. Also, my ORM creates user instances and generates the _id automatically for me. I am not sure if I should just generate an _id and create it this way...


Any comments are greatly appreciated. Thanks!!





Tuesday, December 30, 2014

What is meant by churn?


I came across the following sentence in this blog post:



Know your data - this helps you make the right decisions in terms of data types/nullability and churn helps with long term maintenance goals (and initial maintenance plans)



I can't figure out what is meant by "churn." What is it? I only find plenty of articles talking about "churn" without saying what it is.





How to update commands in Ola Hallengren's SQL Server jobs?


I am looking for an easy way to update the jobs that get created by Ola's MaintenanceSolution.sql script without having to completely delete and re-create them, or having to modify each job individually by going into the job's properties, then into the step, and modifying the command.


Specifically, I'm trying to update the @CleanupTime argument, which is used to tell the job when to delete old backups. According to his FAQ, whenever a newer version of his script is available and you want to update to that newer version, you just need to download the MaintenanceSolution.sql script and run it again, with the option of specifying SET @CreateJobs = 'N' if you don't want to create the jobs. That tells me that if you leave @CreateJobs = 'Y', it would create the jobs anew, but this doesn't seem to be the case.


For example, if the "DatabaseBackup - USER_DATABASES - FULL" job already exists, with the command in the step showing @CleanupTime = 48, then I re-run the MaintenanceSolution.sql script with @CleanupTime = 168, and then go back into the job's step, it still shows @CleanupTime = 48.
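
For what it's worth, the scripted route I am considering as a fallback (a sketch using msdb's sp_update_jobstep; the job name matches my installation, and the @command value is elided because it must contain the full command line for the step):

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @step_id  = 1,
    @command  = N'...the full step command, with @CleanupTime = 168...';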





Is it worth indexing a varchar(32) column of UUIDS where half the UUIDS are v1 and half are v4?


I have a MySQL InnoDB table with tens of millions of rows. Each row has a uuid column, stored in the standard VARBINARY(16). I was recently asked to add a VARCHAR(32) column to this table that stores the non-dashed UUID hex.


While doing this, I discovered that about half our UUIDs are v1 and the other half are v4. This means that for the first half, since they were all generated on a single machine, the randomness is a bit lacking (only the leading timestamp-derived bytes differ). On the other hand, the newer half are basically completely random.


Is it worth putting an index on this column? If so, I'm struggling to decide how large that index should be (or maybe even what type of index).
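
For illustration, the kind of index I am weighing, as a sketch (the table and column names are placeholders; a prefix index trades selectivity for size):

-- index only the first 8 characters of the hex string;
-- v4 rows are random there, but v1 rows share timestamp-derived prefixes
ALTER TABLE my_table ADD INDEX idx_uuid_hex (uuid_hex(8));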





SQL Server 2008 R2 native replication with Column Level Collation


In my SQL Server 2008 R2 environment -


Created a 'source' and 'target' database and altered the database collation as follows -



use master
go
ALTER DATABASE source
COLLATE Chinese_PRC_CI_AS
GO

ALTER DATABASE target
COLLATE Chinese_PRC_CI_AS
GO


I created the below table in my source and target databases.



create table dbo.tcolcs1 (
c0 int not null primary key,

c1 varchar(30) collate SQL_Latin1_General_Cp1_CI_AS ,
c2 char(30) collate SQL_Latin1_General_Cp1_CI_AS ,
c3 text collate SQL_Latin1_General_Cp1_CI_AS ,
c4 nvarchar(30) collate SQL_Latin1_General_Cp1_CI_AS ,
c5 nchar(30) collate SQL_Latin1_General_Cp1_CI_AS ,
c6 ntext collate SQL_Latin1_General_Cp1_CI_AS ,

cA varchar(30) collate Chinese_PRC_CI_AS,
cB char(30) collate Chinese_PRC_CI_AS ,
cC text collate Chinese_PRC_CI_AS,
cD nvarchar(30) collate Chinese_PRC_CI_AS ,
cE nchar(30) collate Chinese_PRC_CI_AS,
cF ntext collate Chinese_PRC_CI_AS
)
go


I have inserted the below record in my source database.



insert into dbo.tcolcs1 values (10, nchar(0x00c4),nchar(0x00c4),'A',nchar(0x00c4), 'B', nchar(0x00c4),NCHAR(0x4EBC),NCHAR(0x4EBC),NCHAR(0x4EBC),NCHAR(0x4EBC),NCHAR(0x4EBC),NCHAR(0x4EBC))


Just for information, character 0x00c4 is available in the Latin1 code page but does not exist in the Chinese_PRC_CI_AS code page.


Once SQL Server replication did its job, I found the below record in the target database.



c0 c1 c2 c3 c4 c5 c6 cA cB cC cD cE cF
10 ? ? A Ä B Ä 亼 亼 亼 亼 亼 亼


Now, the problem is that SQL Server is not replicating columns c1 and c2 correctly. Both columns have a Latin1 collation, and character 0x00c4 exists in the Latin code page.


Is this a limitation of SQL Server, or do any specific settings need to be enabled?





What is the most efficient method of performing the FIZZBUZZ test in SQL Server?


This may not be completely on-topic, however it's a slow day here.


Is there a more efficient method of obtaining a list of numbers from 1 to 49 with a column containing the words FIZZ when the number can be evenly divided by 3, BUZZ when the number can be evenly divided by 5, and FIZZBUZZ when the number can be evenly divided by both 3 and 5?


My attempts are:



DECLARE @Start DATETIME2(7);
DECLARE @End DATETIME2(7);
SET @Start = SYSDATETIME();
;WITH t AS (
SELECT RowNum = ROW_NUMBER() OVER (ORDER BY o.object_id)
FROM sys.objects o
)
SELECT t.RowNum
, CASE WHEN ((t.RowNum % 3) + (t.RowNum % 5)) = 0 THEN 'FIZZBUZZ'
ELSE
CASE WHEN t.RowNum % 3 = 0 THEN 'FIZZ'
ELSE
CASE WHEN t.RowNum % 5 = 0 THEN 'BUZZ'
ELSE ''
END
END
END
FROM t
WHERE t.RowNum < 50;
SET @End = SYSDATETIME();
SELECT DATEDIFF(MICROSECOND, @Start, @End);

GO

DECLARE @Start DATETIME2(7);
DECLARE @End DATETIME2(7);
SET @Start = SYSDATETIME();
DECLARE @t TABLE
(
Num INT NOT NULL PRIMARY KEY CLUSTERED
);
INSERT INTO @t (Num)
SELECT ROW_NUMBER() OVER (ORDER BY o.object_id)
FROM sys.objects o;

SELECT t.Num
, CASE WHEN ((t.Num % 3) + (t.Num % 5)) = 0 THEN 'FIZZBUZZ'
ELSE
CASE WHEN t.Num % 3 = 0 THEN 'FIZZ'
ELSE
CASE WHEN t.Num % 5 = 0 THEN 'BUZZ'
ELSE ''
END
END
END
FROM @t t
WHERE t.Num < 50;
SET @End = SYSDATETIME();
SELECT DATEDIFF(MICROSECOND, @Start, @End);


I'm using SQL Server 2012, so I have the luxury of using DATETIME2(7) as a fairly high precision timer to evaluate how long each process takes. They both take about the same amount of time most of the time, which is to say around 1 millisecond on my workstation. The execution plan for the second version has a total "estimated subtree cost" far higher than the first version.
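
For comparison, a flattened version of the same classification (a sketch reusing the CTE t from the first attempt; since a number divisible by both 3 and 5 is divisible by 15, one CASE suffices):

SELECT t.RowNum
     , CASE WHEN t.RowNum % 15 = 0 THEN 'FIZZBUZZ'
            WHEN t.RowNum % 3  = 0 THEN 'FIZZ'
            WHEN t.RowNum % 5  = 0 THEN 'BUZZ'
            ELSE ''
       END
FROM t
WHERE t.RowNum < 50;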





Query profiling shows "Waiting for query cache lock" but query_cache_size is 0


We have a statement-based replication server that has been experiencing slowdowns, and during the event SHOW FULL PROCESSLIST showed the replication query stuck on "Waiting for query cache lock", which was surprising because the server's query_cache_size is set to 0. Profiling the offending query does show this step in every update to the table.


Is this typical to see in the profile for an update even if the query_cache_size is 0? Is this really just the check for query caching and not the query waiting to obtain a true lock?
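
As a sketch of the first thing I plan to check (my assumption, not a verified explanation: the cache mutex can still be touched when query_cache_type is not OFF, even with a zero-sized cache):

SHOW GLOBAL VARIABLES LIKE 'query_cache_type';
SHOW GLOBAL VARIABLES LIKE 'query_cache_size';

-- if query_cache_type is not OFF, set query_cache_type = 0 in my.cnf
-- (changing it at runtime is restricted in some versions)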





Building year to date aggregations using SQL


I need some help with building a monthly aggregation using only SQL.


Imagine the following table:



TranID DateCode Account Value
1 20140101 1 5
2 20140106 1 -3
3 20140207 1 6
4 20140409 1 3
5 20140103 2 3
6 20140215 2 7
7 20140519 2 6


There are two accounts that have transactions on various dates. I would like to write code that gives me this result assuming we are in July:



MonthID Account YTD
20140101 1 2
20140201 1 8
20140301 1 8
20140401 1 11
20140501 1 11
20140601 1 11
20140701 1 11
20140101 2 3
20140201 2 10
20140301 2 10
20140401 2 10
20140501 2 16
20140601 2 16
20140701 2 16


I'm thinking I should be able to convert the dates to a MonthCode using



DATEADD(day, -DAY(DateCode) + 1, DateCode) AS MonthCode


I'm thinking I should be able to solve this somehow by joining the table to itself, but I don't really get the numbers right. Also, I need to somehow get the months without transactions in as well.


Any and all help will be greatly appreciated!


Code to generate the mock data:



Create table #Tran(
TranID int identity(1,1),
DateCode Date,
AccountCode varchar(50),
Value int
);

insert into #Tran (DateCode, AccountCode, Value)
values ('20140101', '1', 5),
('20140106', '1', -3),
('20140201', '1', 6),
('20140401', '1', 3),
('20140101', '2', 3),
('20140201', '2', 7),
('20140501', '2', 6);
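
For what it's worth, here is a sketch of the shape of solution I am after (my own attempt, not a confirmed answer): build the month list with a recursive CTE, cross join it to the accounts, and compute the YTD with a correlated sum so that months without transactions still appear:

;WITH Months AS (
    SELECT CAST('20140101' AS date) AS MonthID
    UNION ALL
    SELECT DATEADD(MONTH, 1, MonthID)
    FROM Months
    WHERE MonthID < '20140701'
)
SELECT m.MonthID,  -- returned as a date here; formatting as yyyymmdd is left out
       a.AccountCode,
       (SELECT COALESCE(SUM(t.Value), 0)
        FROM #Tran t
        WHERE t.AccountCode = a.AccountCode
          AND t.DateCode < DATEADD(MONTH, 1, m.MonthID)) AS YTD
FROM Months m
CROSS JOIN (SELECT DISTINCT AccountCode FROM #Tran) a
ORDER BY a.AccountCode, m.MonthID;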




Will a non-clustered index covering all the columns of a table affect the DELETE statement?


I am trying to delete thousands of records, say 35K, from a table which contains around 10 million records.



DELETE FROM Table WHERE ScenarioID = 859


I have also tried bulk deletion



DECLARE @RowsDeleted INTEGER;
SET @RowsDeleted = 1;

WHILE (@RowsDeleted > 0)
BEGIN
    DELETE TOP (100) FROM table WHERE ScenarioID = @ID;
    SET @RowsDeleted = @@ROWCOUNT;
END


I would like to ask:



  1. Generally, how much time should it take to delete that number of records?

  2. If there is an index on that table, will it affect the DELETE statement?

     Note: I have a non-clustered index on the ID column. Does this have any effect on performance if I delete records based on that ID?

  3. Are there any common bottlenecks that may cause a delete to take longer than it should?


Is there anything going wrong in this plan?


[screenshot]


PS: My database is hosted in Azure, and it takes just under 10 seconds to execute the same command in the Azure portal.





Does changing a Postgresql table's tablespace compact it like "vacuum full" or "cluster"?


I need to reclaim disk space in a table in a PostgreSQL 9.3.2 database so it can be used for other tables. The standard suggestions to accomplish this are "VACUUM FULL", "CLUSTER", or copying the contents out to a flat file and back in. When moving an existing table to a new tablespace with an ALTER TABLE command, the table's contents are copied. Can anyone tell me whether they are compacted during the ALTER TABLE process?





Data warehouse measures without a corresponding dimension


I have an operational system that contains this table:



salesQuality salesPrice salesCategory


According to my business requirements, salesQuality and salesPrice are measures;


however, salesCategory is a dimension.


but, unfortunately, in the operational database I do have some rows without the salesCategory value.


My question is: what should I do in the fact table to represent these sales that don't have a category value in the category dimension?
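
A sketch of the approach I have been considering (all table and key names here are placeholders): add a dedicated "Unknown" row to the category dimension and point fact rows with a missing category at it, so the fact table's foreign key never has to be NULL:

-- placeholder dimension row for facts with no category
INSERT INTO dimSalesCategory (salesCategoryKey, salesCategoryName)
VALUES (-1, 'Unknown');

-- when loading facts, map missing categories to the Unknown member
INSERT INTO factSales (salesQuality, salesPrice, salesCategoryKey)
SELECT s.salesQuality,
       s.salesPrice,
       COALESCE(d.salesCategoryKey, -1)
FROM stagingSales s
LEFT JOIN dimSalesCategory d
       ON d.salesCategoryName = s.salesCategory;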


Thanks





How can I create user-defined roles?


I have a database in MongoDB 2.6.4 in which I want to restrict users according to their level. I want to give read privileges to some users and insert/update privileges to some other users on a particular database. Please let me know how I can create user-defined roles and assign them to the users. I tried the below, but it is not working; the users still have full privileges:



db.runCommand({
    createRole: "read",
    privileges: [
        {
            resource: { db: "sample", collection: "" },
            actions: [ "find" ]
        }
    ],
    roles: [],
    writeConcern: { w: "majority", wtimeout: 5000 }
})

db.addUser({
    user: "read",
    pwd: "read",
    roles: [ "read" ]
})




MySQL replication slave of slave not receiving updates


I have 4 servers: A, B, C and D.


A and B are in a master-master setup.


C is a slave to B and D is a slave to C.


Changes on A or B get to all nodes other than D. I need them to get to D.


Changes on C get to D, but obviously they don't go to A or B, which is fine.


After some reading I found that I needed the log-slave-updates option set on C, this way C would log the changes replicated to it and then D could read those and make the same updates.


I have set the option to on, and D is waiting for its master to send events, but changes from A or B are still not getting to D.
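
A sketch of what I am verifying on C (my understanding: log-slave-updates has no effect unless binary logging is also enabled on C, and the server must be restarted after editing my.cnf):

-- on C: both should be ON for replicated changes to reach C's binary log
SHOW GLOBAL VARIABLES LIKE 'log_bin';
SHOW GLOBAL VARIABLES LIKE 'log_slave_updates';

-- on C: note the current binlog file/position; D must replicate from a
-- position written after the option took effect
SHOW MASTER STATUS;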





Check any changes done to database


I want to check whether a SQL Server database has been modified or not. The reason for checking is backup (SQL database backup): I will do a full backup first, and if any data is inserted or modified, or any structural change is made, then I have to detect that the database has changed and back it up. The check runs on a frequency (daily, hourly, etc.). The modified date of the mdf or ldf file can't be used to check whether the database was modified. A differential backup only backs up changes since the last backup, but it is different: it performs a backup even if there have been no changes since the last backup (the size will just be very small, in KB). So how can I check whether any changes were made to the database after my last backup?
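
One heuristic I have been considering, as a sketch (an assumption on my part, not a verified method: sys.dm_db_index_usage_stats records the last write per index, but its contents are lost on a server restart, so it cannot be the only signal):

-- most recent write recorded against any index in the current database
SELECT MAX(last_user_update) AS last_modification
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID();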





Understanding multiple table join with aggregation


I have a basic question about how JOIN works on multiple tables. Consider the tables below, where I want to count the occurrences of the foreign key in link1 and link2.



CREATE TABLE main (
id SERIAL PRIMARY KEY,
name text NOT NULL
);

CREATE TABLE link1 (
id SERIAL PRIMARY KEY,
main_id integer NOT NULL,
CONSTRAINT main_id_fk FOREIGN KEY (main_id) REFERENCES main (id)
);

CREATE TABLE link2 (
id SERIAL PRIMARY KEY,
main_id integer NOT NULL,
CONSTRAINT main_id_fk FOREIGN KEY (main_id) REFERENCES main (id)
);

INSERT INTO main (name) VALUES ('a'), ('b'), ('c'), ('d');
INSERT INTO link1 (main_id) VALUES (1), (1), (2);
INSERT INTO link2 (main_id) VALUES (2), (2), (3);


SQL Fiddle


If it was just 1 table, I'd write:



SELECT main.id, COUNT(link1.main_id)
FROM main
LEFT JOIN link1 ON main.id=link1.main_id
GROUP BY main.id


With 2 tables, I extend it this way. This gives wrong results, but I can't quite explain the inner workings of a multiple-table join. I expected this to join link1 and link2 to main and add up the occurrences of the FK main_id in each. But it looks like the query is multiplying the results of the individual counts.



SELECT main.id, COUNT(link1.main_id) + COUNT(link2.main_id)
FROM main
LEFT JOIN link1 ON main.id=link1.main_id
LEFT JOIN link2 ON main.id=link2.main_id
GROUP BY main.id


P.S. I know how I can achieve the right answer, but I want to understand what's wrong with the above approach.
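
For anyone puzzling over the same thing, my current understanding is that the join produces one row per combination: main.id = 2 matches 1 row in link1 and 2 rows in link2, so the join yields 1 × 2 = 2 rows, and both counts are taken over that inflated set. A sketch of the shape that avoids it, pre-aggregating each link table before joining:

SELECT m.id,
       COALESCE(l1.cnt, 0) + COALESCE(l2.cnt, 0) AS total
FROM main m
LEFT JOIN (SELECT main_id, COUNT(*) AS cnt
           FROM link1
           GROUP BY main_id) l1 ON m.id = l1.main_id
LEFT JOIN (SELECT main_id, COUNT(*) AS cnt
           FROM link2
           GROUP BY main_id) l2 ON m.id = l2.main_id
ORDER BY m.id;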





Can not run PostgreSQL after drop table


I've deleted a database from PostgreSQL, and after this I can do nothing. Here is what I receive with createuser:



$ createuser -D new_user
createuser: could not connect to database postgres: FATAL: role "postgres" does not exist


And this when I try to run psql:



$ psql
psql: FATAL: role "postgres" does not exist


I've tried the above with the root, postgres, and custom users. Still the same output.


Also, I searched for pg_hba.conf like this: find / -name 'pg_hba.conf', and got this:



/var/lib/postgres-xc/DN2/pg_hba.conf
/var/lib/postgres-xc/DN1/pg_hba.conf
/var/lib/postgres-xc/CN/pg_hba.conf
/etc/postgres-xc/coordinator/CN/pg_hba.conf
/etc/postgres-xc/datanode/DN2/pg_hba.conf
/etc/postgres-xc/datanode/DN1/pg_hba.conf
/etc/postgresql/9.3/main/pg_hba.conf


I've changed the connection method to trust for localhost and IPv4 in /etc/postgresql/9.3/main/pg_hba.conf.


What should I do to get rid of this error?


Should the connection method in the other files above be edited too?





Login failed for user 'x' in SQL Server


We sometimes receive the error "Login failed for user 'x'" from the application. We are 100% sure the login works, as we checked it manually from SSMS as well as from the application. It happens only sometimes. How can we find the exact reason for the failure? Why does it fail only sometimes? Below is the whole stack trace.


Exception Type : System.Data.SqlClient.SqlException


Message : Login failed for user 'x'.


Source : .Net SqlClient Data Provider


TargetSite : Void OnError(System.Data.SqlClient.SqlException, Boolean)


StackTrace :


at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
at System.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, Boolean ignoreSniOpenTimeout, Int64 timerExpire, SqlConnection owningObject, Boolean withFailover)
at System.Data.SqlClient.SqlInternalConnectionTds.LoginWithFailover(Boolean useFailoverHost, String primaryHost, String failoverHost, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, Int64 timerStart)
at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
at System.Data.Common.DbDataAdapter.QuietOpen(IDbConnection connection, ConnectionState& originalState)
at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)
at Microsoft.Practices.EnterpriseLibrary.Data.Database.DoLoadDataSet(DbCommand command, DataSet dataSet, String[] tableNames)
at Microsoft.Practices.EnterpriseLibrary.Data.Database.LoadDataSet(DbCommand command, DataSet dataSet, String[] tableNames)
at Microsoft.Practices.EnterpriseLibrary.Data.Database.LoadDataSet(DbCommand command, DataSet dataSet, String tableName)
at Microsoft.Practices.EnterpriseLibrary.Data.Database.ExecuteDataSet(DbCommand command)
at DL.EIN.Exchange.DL.NotesByVisitID(Int32 UserID, Int32 VendorID, Int32 VisitID) in D:\Projects\SVN Code\ENT\36.07\ENT05053607RC\ExchangeDL\Patient\DL.TblPatientMaster.vb:line 4674





sync 2 tables in a single database in mysql with no duplicacy


I have 2 tables in a single database of MySQL.


The structures are like this:


mysql> desc main_tbl;
+-----------------+--------------+------+-----+-------------------+----------------+
| Field           | Type         | Null | Key | Default           | Extra          |
+-----------------+--------------+------+-----+-------------------+----------------+
| mid             | int(11)      | NO   | PRI | NULL              | auto_increment |
| acc             | varchar(255) | NO   | MUL | NULL              |                |
| device          | varchar(255) | NO   |     | NULL              |                |
| start           | datetime     | NO   |     | NULL              |                |
| end             | datetime     | NO   |     | NULL              |                |
| login_user      | varchar(255) | NO   |     | NULL              |                |
| date_added      | datetime     | NO   |     | CURRENT_TIMESTAMP |                |
+-----------------+--------------+------+-----+-------------------+----------------+


mysql> desc report_backup;
+--------------+--------------+------+-----+-----------+----------------+
| Field        | Type         | Null | Key | Default   | Extra          |
+--------------+--------------+------+-----+-----------+----------------+
| rid          | int(11)      | NO   | PRI | NULL      | auto_increment |
| mid          | int(11)      | NO   | UNI | NULL      |                |
| times        | varchar(255) | NO   |     | 2 minutes |                |
| account      | varchar(255) | NO   |     | NULL      |                |
| dev          | varchar(255) | NO   |     | NULL      |                |
| start        | datetime     | NO   |     | NULL      |                |
| end          | datetime     | NO   |     | NULL      |                |
| logged_user  | varchar(255) | NO   |     | NULL      |                |
| added_time   | datetime     | NO   |     | NULL      |                |
+--------------+--------------+------+-----+-----------+----------------+


The first table is populated from user input, where records can be added and deleted.


The second table is used for backup: whenever a record is inserted into the first one, it should be automatically copied.


So I used a query like this, to run every time any record is inserted into the 1st table:


INSERT INTO report_backup(mid, account, dev, start, end, logged_user, added_time) SELECT mid, acc, device, start, end, login_user, date_added from main_tbl;


But this does not seem to work, because mid is unique (which I made it to avoid duplicate entries) and the query re-selects every row from main_tbl rather than just the newly inserted one.
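
One sketch of a fix I am considering (my assumption: copying only the newly inserted row with an AFTER INSERT trigger avoids re-inserting existing mids):

DELIMITER //
CREATE TRIGGER main_tbl_backup
AFTER INSERT ON main_tbl
FOR EACH ROW
BEGIN
    -- copy just the row that was inserted, using the NEW pseudo-row
    INSERT INTO report_backup (mid, account, dev, `start`, `end`, logged_user, added_time)
    VALUES (NEW.mid, NEW.acc, NEW.device, NEW.`start`, NEW.`end`, NEW.login_user, NEW.date_added);
END //
DELIMITER ;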


Any suggested solution will be appreciated.


Thanks










SQL aggregate function performance in a table with many records that is well-indexed


My question is about 3 tables that store the data of a single entity; there is a report-generation feature that reads these tables and calculates COUNT and AVG information.


There are three tables: review, rating, and rating_values.


Review table



CREATE TABLE IF NOT EXISTS `review` (
  `review_id` int(10) unsigned NOT NULL AUTO_INCREMENT COMMENT 'Unique review ID of this review.',
  `nid` int(10) unsigned NOT NULL,
  `uid` int(10) unsigned NOT NULL, -- implied by the uid/nid_uid keys below
  `sub_target` int(10) unsigned NOT NULL,
  `timestamp` int(10) unsigned NOT NULL DEFAULT '0',
  `score` float DEFAULT NULL,
  PRIMARY KEY (`review_id`),
  KEY `nid` (`nid`),
  KEY `uid` (`uid`),
  KEY `nid_uid` (`nid`,`uid`),
  KEY `score` (`score`),
  KEY `timestamp` (`timestamp`)
);


The review table acts as the primary table. Each review can have 0 or more rating values (we have a limit of 18 at the moment at the application level). A rating value is something like "Food", "Location", etc. for a restaurant review.


Rating table



CREATE TABLE `rating` (
  `review_id` int(10) unsigned NOT NULL, -- foreign key to {review}.review_id
  `type` varchar(255) NOT NULL,
  `score` float DEFAULT NULL,
  KEY `review_id` (`review_id`),
  KEY `score` (`score`)
);


In this table, type is a machine name for the "food", "location", etc. I mentioned above. There is a foreign key to the base table that stores these rating-type definitions. In our application, each rating can have multiple rating values that count towards the main rating. Say there is "food"; there can be multiple sub-questions such as "food - appearance" and "food - taste". These values can be 0 or more. If there is more than 1 sub-question, this score field contains the average of the sub-questions (please see the table below).


Rating value table



CREATE TABLE `rating_value` (
  `review_id` int(10) unsigned NOT NULL, -- foreign key to {review}.review_id
  `type` varchar(255) NOT NULL,
  `sid` int(10) unsigned DEFAULT NULL,
  `score` int(10) unsigned DEFAULT NULL,
  KEY `review_id` (`review_id`),
  KEY `type` (`type`)
);


This is where I expect most of the data to go. Each sub-question has an sid that is referenced here.




A typical review would add rows like this:



Review:

review_id | nid | sub_target | timestamp | score
1 | 583 | 786| 1388167200| 4.32

Rating (please note how the two floats add up to 4.32 above):

review_id|type |score
1 |food |2.10
1 |location |2.22

Rating value:

review_id|type |sid|score
1 |food |1 |2.00
1 |food |1 |2.30
1 |location |2 |2.12
1 |location |3 |2.22
1 |location |3 |2.22



  • We will be generating reports with COUNT and AVG on the rating_value table. However, there will always be a JOIN with the review table, and the filter will be review.nid, which is indexed (see the sketch after this list).

  • When accessing reviews, the application will almost always use review_id, which is either a primary or a foreign key in all tables.

  • Sometimes we need to CAST the score field to the nearest integer.
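
As a sketch, a typical report query would have this shape (nid 583 comes from the example row above; this is the intended usage, not a finalized query):

SELECT rv.type,
       COUNT(*)      AS value_count,
       AVG(rv.score) AS avg_score
FROM rating_value rv
INNER JOIN review r ON r.review_id = rv.review_id
WHERE r.nid = 583
GROUP BY rv.type;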


Admittedly I'm not an expert in databases, but I spent a lot of time normalizing the above, and this is the design I ended up with after all three normalization steps. I'm eager to learn, so if you have suggestions about the above table structure, please don't hesitate to mention them. Note that the review and rating tables will contain some extra data that I didn't mention here; the rating table has a text field to allow users to write a text review. That's why rating values and ratings are separated.


The reports we generate will never run on all rows, just a small set of rows filtered by nid (plus additional filters).


My questions are:




  • Is it common to spread the data of a single user operation across multiple rows? I expect the rating_value table to grow to almost 10 times the record count of the review table.




  • While (inner) JOINing tables and calculating averages, can database engines filter down the eligible rows using the indexes first?




  • If you have any other thoughts about the above schema, please don't hesitate to comment.












How to calculate a total by comparing current and next row


I have a table with some rows: [screenshot]


I have a few questions to accomplish this task. Is it possible to calculate the total of transmitted and received based on checked_date, ip, and connect_time using a MySQL query? If yes, can I use logical operators to calculate the total of transmitted and received? To get the correct totals of transmitted and received I need to use a formula like:



if connect_time[i] > connect_time[j]
then total += transmitted[i]+transmitted[j]
else total += ABS(transmitted[i]-transmitted[j])
end


How do I get this done with this formula; does anyone have an idea? P.S. i = the current row and j = the next row.
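
A sketch of how I imagine expressing the current/next-row pairing in MySQL (the table name traffic and its columns are my assumptions, since the real schema is only visible in the screenshot):

SELECT cur.ip,
       SUM(CASE WHEN cur.connect_time > nxt.connect_time
                THEN cur.transmitted + nxt.transmitted
                ELSE ABS(cur.transmitted - nxt.transmitted)
           END) AS total_transmitted
FROM traffic cur
JOIN traffic nxt
  ON nxt.ip = cur.ip
 -- pair each row with the next row for the same ip, ordered by checked_date
 AND nxt.checked_date = (SELECT MIN(t.checked_date)
                         FROM traffic t
                         WHERE t.ip = cur.ip
                           AND t.checked_date > cur.checked_date)
GROUP BY cur.ip;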





Compare two columns for difference mark the difference in mysql


I need to compare two fields and mark the matching ones in a third field.


For Example:


Tables


customer: id, phonenumber, followstatus


received_Customer: rc_id, rc_phonenumber.


I need to mark the phone number with 'f' or 'n/a' in the follow-up status for numbers which appear in both the received_customer and customer tables.


UPDATE customer b JOIN received_customer n ON b.phonenumber = n.rc_phonenumber SET b.followstatus = 'y';


I run the query from PHP.





Rollback on \include error in psql


I'm trying to execute an SQL script file in a single transaction. The file contains includes for some other scripts, which in my example create some tables. It looks like this:



\include ../tables/table1.cre
\include ../tables/table2.cre
...
\include ../tables/table10.cre


I'm executing it using psql:



psql -X --set AUTOCOMMIT=off --set ON_ERROR_STOP=on -e --single-transaction -d my_db -f my_script.sql


The problem is that errors with the \include meta-command do not cause a transaction rollback; e.g., if one of the tableX.cre files is missing, any changes before its include will be committed. However, if there's some SQL syntax error, everything works as expected.


Is it possible to somehow handle include-related errors and roll back the active transaction?


I'm using PostgreSQL 9.2.1.





Is it ever a bad idea to rebuild indexes?


We have a customer database that is heavily fragmented - practically every table with more than 1000 pages has >95% fragmentation. Fill factors are set to sensible values, but page space usage is nowhere near the fill factor for most tables.


This is the result of no maintenance being performed on the database.


Rebuilding the indexes using Ola Hallengren's IndexOptimize reduces fragmentation as expected. On the existing production hardware, performance of the application improves as expected. All the metrics I normally use - client statistics on heavy queries, profiler durations, read/write stalls, application logs and user perception - indicate performance is improved.


However, a new database server backed with Intel PCIe SSDs is showing the opposite of what we expect. Highly fragmented, the application performs well. After rebuilding indexes, the application performs badly. Some operations that took ~90s now take ~6mins. However, none of the other metrics appear to indicate that the system is going slower.


Is this something anyone else has experienced?





Reason for bigger or smaller number of rows than expected with mysqlslap


I did a simple insert operation with mysqlslap as follows:



mysqlslap --create-schema=test_crypt -u root -pmysql --delimiter=";" --query="INSERT INTO reference VALUES ('Jean', 1)" --concurrency=20 --iterations=1000


After this I noticed that the number of rows in the table, which I thought should be equal to the number of iterations times the number of concurrent clients, was either a bit larger or smaller than 20000. Is there a reason for that? Is it linked to mysqlslap or to the INSERT operation itself?


Thank you





How can I automate the process of log shipping monitoring?


I have log shipping configured between my production SQL Server 2005 and QA SQL Server 2005 instances. Per a requirement from the business, we need to provide them with a report at the end of the day on the log shipping configured for multiple databases on their respective instances.


If I go in manually and use the standard reports, I am able to achieve this. But is there a way I can get such a report sent via e-mail, confirming the status of the log shipping configured among the various DBs in an instance?
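
A sketch of where I would expect to pull the data from (an assumption: the log shipping monitor tables in msdb, which exist in SQL Server 2005), which could then be scheduled and e-mailed:

SELECT secondary_server,
       secondary_database,
       last_copied_date,
       last_restored_date,
       last_restored_latency
FROM msdb.dbo.log_shipping_monitor_secondary;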


Thanks, Kapil





Oracle 12c: merge subpartition data into partition


I have an Oracle 12c table which is interval partitioned (monthly) and list subpartitioned. There is no longer a requirement to subpartition the table, and hence I would like to "merge" the data in the subpartitions into the corresponding monthly partition.


I do not want to "drop" the subpartitions and lose the data within them.


I am looking for an alternative to simply copying the data to a new partitioned table and renaming new to old.
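
For list subpartitions, a sketch of the statement I have been looking at (the table and subpartition names are placeholders; note that this merges subpartitions pairwise within a partition rather than removing subpartitioning entirely):

ALTER TABLE my_table
  MERGE SUBPARTITIONS sp_2014_12_a, sp_2014_12_b
  INTO SUBPARTITION sp_2014_12_all;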





Monday, December 29, 2014

How to set ADR Base directory correctly on OracleDB on Linux?


I installed Oracle Database on a Linux-based OS. The listener is OK, but Oracle is not. (I have replaced all angle brackets with vertical bars, because the parser does not show angle brackets.)



ORA-48108: invalid value given for the diagnostic_dest init.ora parameter

ORA-48140: the specified ADR Base directory does not exist [/volumeX/app/oracle/product/11.2.0/xe/dbs/|ORACLE_BASE|]

ORA-48187: specified directory does not exist



That's the line in initXE.ora and init.ora:



diagnostic_dest='|ORACLE_BASE|'



Result of echo $ORACLE_BASE:



/u01/app/oracle



u01 is a symlink to the volumeX directory.
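
My working theory (an assumption on my part): Oracle parameter files do not expand environment variables, so the literal string |ORACLE_BASE| is being treated as a directory name. A sketch of the fix in initXE.ora/init.ora, using the literal path reported by echo $ORACLE_BASE:

diagnostic_dest='/u01/app/oracle'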


How do I solve this problem?


Thank you for your time.





SSIS Dataflow Task issue, moving data from SQL Server 2008 to MySQL via ADO.Net


Can you please help me get this problem solved?


I'm trying to build an SSIS package that moves data from SQL Server 2008 to MySQL. However, I'm getting a weird problem in the data flow task.


It fails when the length of the data in a (varchar) column exceeds 32 chars. I'm really confused by this.


The SSIS execution results don't show any proper message for the failure.


[ADO NET Destination [329]] Error: An exception has occurred during data insertion, the message returned from the provider is:


[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "ADO NET Destination" (329) failed with error code 0xC020844B while processing input "ADO NET Destination Input" (332). The identified component returned an error from the ProcessInput method. The error is specific to the component,but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.


Please help...





How can I solve ORA-03113: end-of-file on communication channel (Process ID: 4000, Session ID: 191, serial number: 5)?


Here is the alert log; please tell me the solution for it.


Current log# 2 seq# 30254 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 advanced to log sequence 30255 (LGWR switch) Current log# 3 seq# 30255 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:05:50 2014 Thread 1 cannot allocate new log, sequence 30256 Checkpoint not complete Current log# 3 seq# 30255 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 advanced to log sequence 30256 (LGWR switch) Current log# 1 seq# 30256 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:06:41 2014 Thread 1 cannot allocate new log, sequence 30257 Checkpoint not complete Current log# 1 seq# 30256 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Thread 1 advanced to log sequence 30257 (LGWR switch) Current log# 2 seq# 30257 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:07:33 2014 Thread 1 cannot allocate new log, sequence 30258 Checkpoint not complete Current log# 2 seq# 30257 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 advanced to log sequence 30258 (LGWR switch) Current log# 3 seq# 30258 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:08:24 2014 Thread 1 cannot allocate new log, sequence 30259 Checkpoint not complete Current log# 3 seq# 30258 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 advanced to log sequence 30259 (LGWR switch) Current log# 1 seq# 30259 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:09:15 2014 Thread 1 cannot allocate new log, sequence 30260 Checkpoint not complete Current log# 1 seq# 30259 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Thread 1 advanced to log sequence 30260 (LGWR switch) Current log# 2 seq# 30260 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:10:06 2014 Thread 1 cannot allocate new log, sequence 30261 Checkpoint not complete Current log# 2 seq# 30260 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 advanced to log sequence 30261 (LGWR switch) Current log# 3 seq# 30261 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:10:54 2014 Thread 1 cannot allocate new log, sequence 30262 Checkpoint not complete Current log# 3 seq# 30261 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 advanced to log sequence 30262 (LGWR switch) Current log# 1 seq# 30262 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:11:45 2014 Thread 1 cannot allocate new log, sequence 30263 Checkpoint not complete Current log# 1 seq# 30262 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Thread 1 advanced to log sequence 30263 (LGWR switch) Current log# 2 seq# 30263 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:12:33 2014 Thread 1 cannot allocate new log, sequence 30264 Checkpoint not complete Current log# 2 seq# 30263 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 advanced to log sequence 30264 (LGWR switch) Current log# 3 seq# 30264 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:13:24 2014 Thread 1 cannot allocate new log, sequence 30265 Checkpoint not complete Current log# 3 seq# 30264 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 advanced to log sequence 30265 (LGWR switch) Current log# 1 seq# 30265 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:14:15 2014 Thread 1 advanced to log sequence 30266 (LGWR switch) Current log# 2 seq# 30266 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:15:00 2014 Thread 1 cannot allocate new log, sequence 30267 Checkpoint 
not complete Current log# 2 seq# 30266 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 advanced to log sequence 30267 (LGWR switch) Current log# 3 seq# 30267 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:15:46 2014 Thread 1 cannot allocate new log, sequence 30268 Checkpoint not complete Current log# 3 seq# 30267 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 advanced to log sequence 30268 (LGWR switch) Current log# 1 seq# 30268 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Thread 1 cannot allocate new log, sequence 30269 Checkpoint not complete Current log# 1 seq# 30268 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:16:03 2014 Thread 1 advanced to log sequence 30269 (LGWR switch) Current log# 2 seq# 30269 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 cannot allocate new log, sequence 30270 Checkpoint not complete Current log# 2 seq# 30269 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 advanced to log sequence 30270 (LGWR switch) Current log# 3 seq# 30270 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 cannot allocate new log, sequence 30271 Checkpoint not complete Current log# 3 seq# 30270 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:16:28 2014 Thread 1 advanced to log sequence 30271 (LGWR switch) Current log# 1 seq# 30271 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Thread 1 cannot allocate new log, sequence 30272 Checkpoint not complete Current log# 1 seq# 30271 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Thread 1 advanced to log sequence 30272 (LGWR switch) Current log# 2 seq# 30272 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Thread 1 cannot allocate new log, sequence 30273 Checkpoint not complete Current log# 2 seq# 30272 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:16:46 2014 Thread 1 advanced to log sequence 30273 (LGWR switch) Current log# 3 seq# 30273 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Thread 1 cannot allocate new log, sequence 30274 Checkpoint not complete Current log# 3 seq# 30273 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:18:16 2014 Thread 1 advanced to log sequence 30274 (LGWR switch) Current log# 1 seq# 30274 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:19:23 2014 Thread 1 cannot allocate new log, sequence 30275 Checkpoint not complete Current log# 1 seq# 30274 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:19:36 2014 Thread 1 advanced to log sequence 30275 (LGWR switch) Current log# 2 seq# 30275 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:19:55 2014 Thread 1 cannot allocate new log, sequence 30276 Checkpoint not complete Current log# 2 seq# 30275 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log Tue Dec 30 08:20:15 2014 Thread 1 advanced to log sequence 30276 (LGWR switch) Current log# 3 seq# 30276 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:22:59 2014 Thread 1 cannot allocate new log, sequence 30277 Checkpoint not complete Current log# 3 seq# 30276 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log Tue Dec 30 08:23:29 2014 Thread 1 advanced to log sequence 30277 (LGWR switch) Current log# 1 seq# 30277 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:23:45 2014 Thread 1 cannot allocate new log, sequence 30278 Checkpoint not complete Current log# 1 seq# 30277 mem# 0: /u01/app/oracle/oradata/localhost/redo01.log Tue Dec 30 08:25:21 2014 
Thread 1 advanced to log sequence 30278 (LGWR switch)
  Current log# 2 seq# 30278 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log
Tue Dec 30 08:25:50 2014
Thread 1 cannot allocate new log, sequence 30279
Checkpoint not complete
  Current log# 2 seq# 30278 mem# 0: /u01/app/oracle/oradata/localhost/redo02.log
Tue Dec 30 08:27:56 2014
Thread 1 advanced to log sequence 30279 (LGWR switch)
  Current log# 3 seq# 30279 mem# 0: /u01/app/oracle/oradata/localhost/redo03.log

[... the same "Thread 1 cannot allocate new log / Checkpoint not complete" cycle repeats through log sequence 30300, rotating across redo01.log, redo02.log and redo03.log, until Tue Dec 30 08:55:38 2014 ...]

Tue Dec 30 09:32:08 2014
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Picked latch-free SCN scheme 3
Using LOG_ARCHIVE_DEST_1 parameter default value as USE_DB_RECOVERY_FILE_DEST
Autotune of undo retention is turned on.
IMODE=BR ILAT =27
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options.
Using parameter settings in server-side spfile /u01/app/oracle/product/11.2.0/db_1/dbs/spfilelocalhost.ora
System parameters with non-default values:
  processes                 = 150
  memory_target             = 1552M
  control_files             = "/u01/app/oracle/oradata/localhost/control01.ctl"
  control_files             = "/u01/app/oracle/flash_recovery_area/localhost/control02.ctl"
  db_block_size             = 8192
  compatible                = "11.2.0.0.0"
  db_recovery_file_dest     = "/u01/app/oracle/flash_recovery_area"
  db_recovery_file_dest_size= 3882M
  undo_tablespace           = "UNDOTBS1"
  remote_login_passwordfile = "EXCLUSIVE"
  db_domain                 = "localdomain"
  dispatchers               = "(PROTOCOL=TCP) (SERVICE=localhostXDB)"
  audit_file_dest           = "/u01/app/oracle/admin/localhost/adump"
  audit_trail               = "DB"
  db_name                   = "localhos"
  db_unique_name            = "localhost"
  open_cursors              = 300
  diagnostic_dest           = "/u01/app/oracle"
Tue Dec 30 09:32:10 2014
PMON started with pid=2, OS id=3565
VKTM started with pid=3, OS id=3567 at elevated priority
VKTM running at (10)millisec precision with DBRM quantum (100)ms
GEN0, DIAG, DBRM, PSP0, DIA0, MMAN, DBW0, LGWR, CKPT, SMON, RECO, MMON and MMNL started (pids 4-16)
starting up 1 dispatcher(s) for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'...
starting up 1 shared server(s) ...
ORACLE_BASE from environment = /u01/app/oracle
Tue Dec 30 09:32:11 2014
ALTER DATABASE MOUNT
USER (ospid: 3606): terminating the instance
Instance terminated by USER, pid = 3606

[... four further startup attempts, at 09:34:56, 09:40:16, 10:07:04 and 10:11:32, show exactly the same parameter listing and background-process startup, and each ends the same way: ALTER DATABASE MOUNT, then "Instance terminated by USER" (pids 3746, 3530, 3830 and 4000 respectively) ...]





Index rebuild log growth


How much log will an index rebuild generate? I remember reading that an index rebuild should generate roughly the same amount of log as the table size, but my tests are showing otherwise. We need this estimate since we are trying to build indexes on an Azure database, and it has a limitation of max 2 GB.


My database is in full recovery model.


Table Size:


[screenshot]


log size:


[screenshot]


From the pictures you can see that log generation was much lower with the online index rebuild operation. Can someone correct me if I am missing anything?
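One way to measure it directly (a sketch; dbo.MyTable and IX_MyIndex are hypothetical names) is to wrap a rebuild in a transaction and, before committing, read how much log that transaction has generated from the DMVs:


BEGIN TRAN;

-- offline rebuild; an ONLINE = ON rebuild is logged differently
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

-- log generated by this transaction so far, in MB
SELECT dt.database_transaction_log_bytes_used / 1048576.0 AS log_mb
FROM sys.dm_tran_session_transactions AS st
INNER JOIN sys.dm_tran_database_transactions AS dt
    ON dt.transaction_id = st.transaction_id
WHERE st.session_id = @@SPID
  AND dt.database_id = DB_ID();

COMMIT;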





Postgresql Rename Table without Updating View Definitions


I found out today that renaming a table in PostgreSQL also automatically updates the definitions of views that use the renamed table. Is there any way to turn this off?


So I rename the table with:



ALTER TABLE xyz RENAME TO abc;


And my VIEW definition, for the sake of example, is:



CREATE VIEW foo AS SELECT * FROM xyz;


Next thing I know after I rename the table, the definition for foo has changed to point to table abc.


I'm using Postgresql 8.4
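For what it's worth, this behaviour is by design and cannot be turned off: PostgreSQL stores the parsed view, which references the table by its internal OID rather than by name, so the deparsed definition simply shows whatever the table is currently called. A minimal demonstration (any scratch database will do):


CREATE TABLE xyz (id integer);
CREATE VIEW foo AS SELECT * FROM xyz;

ALTER TABLE xyz RENAME TO abc;

-- the stored definition follows the rename automatically
SELECT pg_get_viewdef('foo'::regclass);   -- now reads "... FROM abc"


If a view must keep selecting from something named xyz, the only option is to recreate the view against a new table of that name.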





How to measure or find cost of creating a query plan?


I have a typical case where parameter sniffing causes a "bad" execution plan to land in the plan cache, causing subsequent executions of my stored procedure to be very slow. I can "solve" this problem with local variables, OPTIMIZE FOR ... UNKNOWN, and OPTION(RECOMPILE). However, I can also dive into the query and try to optimize it.


I'm trying to determine whether I should: given limited time to fix problems, I would like to know the cost of not doing it. As I see it, if I just stick with OPTION(RECOMPILE), the net effect is that a query plan is recreated every time the query runs. So, I think I need to know:


How to find out what the cost of creating a query plan is?


To answer my own question, I've Googled (e.g. with this query), and I've gone through the documentation of columns for the dm_exec_query_stats DMV. I've also inspected the output window in SSMS for "Actual Query Plan" to find this info. Finally, I've searched DBA.SE. None of those led to an answer.


Can anyone tell me? Is it possible to find or measure the time needed for plan creation?
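One place the number does exist is in the cached showplan XML itself: the QueryPlan element carries CompileTime (elapsed ms) and CompileCPU (CPU ms) attributes. A sketch that pulls them out of the plan cache (treat it as an approximation; plans without those attributes return NULL):


WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
    qp.query_plan.value('(//QueryPlan/@CompileTime)[1]', 'int') AS compile_time_ms,
    qp.query_plan.value('(//QueryPlan/@CompileCPU)[1]',  'int') AS compile_cpu_ms,
    st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY compile_time_ms DESC;


SET STATISTICS TIME ON also reports a "SQL Server parse and compile time" line for an individual statement whenever a plan is actually compiled.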





Cannot start SQL Server Agent: Error 1053


A few months back I installed SQL Server 2008 R2 on a server. At installation time I chose:


  1. Mixed authentication.

  2. NT Authority\NetworkService as the SQL Server Agent account, and the same account for all the other components.


Everything worked smoothly at that time, but now when I start the Agent I get the error below.


Windows could not start the SQL Server Agent (MSSQLSERVER) service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion.


http://support.microsoft.com/kb/322140


I followed this link. The steps are not entirely clear, but I tried a few of them. When I ran the executable from cmd as administrator, I got the error below.


The program can't start because sqlncli10.dll is missing from your computer. Try reinstalling the program to fix this problem.


Can anyone please help me with easy steps?


Thanks in advance.


Prasad





SQL Server Trace Flag 3688 is not working on SQL 2014?


We are using some third-party tools that fill up the SQL Server error log with entries like:


SQL Trace ID 1 was started by login "sa". SQL Trace ID 2 was started by login "XX\xxxxx". SQL Trace stopped. Trace ID = '2'. Login Name = 'XX\xxxxx'. SQL Trace ID 2 was started by login "XX\xxxxx".


So I am trying to suppress these trace messages by enabling trace flag 3688, but it is not working on SQL 2014, SQL 2012, or SQL 2008 R2. Below is a snapshot of the configuration. Please let me know if I am missing anything.


[screenshot]


After setting this, I restarted the SQL instance and verified with the DBCC TRACESTATUS command that the trace flag is enabled:


[screenshot]


After everything is set, the log is still being filled with all these trace-related messages.
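For readers without the screenshots: a globally enabled startup trace flag should report Status = 1 and Global = 1 from the check below (a sketch of the verification step described above). Note that 3688 is an undocumented trace flag, so its behaviour can differ between versions.


DBCC TRACESTATUS(3688, -1);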


Please Help !!


Thanks in Advance, Subhro Saha





phpmyadmin recommended indexes


I'm looking for a tool of some sort that allows phpMyAdmin to recommend indexes based on the queries that are actually being run.


I've found the Advisor tab in phpMyAdmin, which recommends indexing certain columns, but it doesn't show which queries those indexes would help.


Does anyone know of a place in phpMyAdmin that will show which queries need indexes, or of a tool that can help out with that?
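For context, phpMyAdmin's Advisor works from server status counters, not from individual statements, so it cannot tie a recommendation back to a query. One common workaround (a sketch, not phpMyAdmin-specific) is to have MySQL itself record the candidate queries and then EXPLAIN them:


-- capture slow queries and queries that use no index
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- seconds
SET GLOBAL log_queries_not_using_indexes = 'ON';
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';   -- example path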


Thanks.





mysql ERROR 1248: Every derived table must have its own alias



-- --------------------------------------------------------------------------------
-- Routine DDL
-- Note: comments before and after the routine body will not be stored by the server
-- --------------------------------------------------------------------------------
DELIMITER $$

CREATE DEFINER=`root`@`localhost` PROCEDURE `usp_MissingBooks`(p_UsrID BIGINT ,
p_Product VARCHAR(20))
BEGIN


DECLARE p_LinkedBoard INT;
DECLARE p_LinkedSubjects INT;
DECLARE p_LinkedClasses INT;
DECLARE p_LinkedBoards INT;
DECLARE p_SchoolBooks INT;

SELECT COUNT(*) INTO p_LinkedBoard
FROM userboards
WHERE userboards.usrid = p_UsrID;
SELECT COUNT(*) INTO p_LinkedClasses
FROM userclasses
WHERE userclasses.UsrID = p_UsrID;
SELECT COUNT(*) INTO p_LinkedSubjects
FROM usersubjects
WHERE usersubjects.usrid = p_UsrID;
SELECT COUNT(*) INTO p_LinkedBoards
FROM schoolboards;
SELECT COUNT(*) INTO p_SchoolBooks
FROM SchoolBooks;
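-- note: temp_MatchList is only created below; at call time this SELECT will
-- fail unless the temporary table still exists from an earlier call in the
-- same session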
Select * From temp_MatchList;

CREATE TEMPORARY TABLE temp_MatchList(ID BIGINT ,
Year INT ,
Name NVARCHAR(100) ,
SyllabusDisplayName NVARCHAR(100) ,
BookType NVARCHAR(20) ,
ClassName NVARCHAR(20) ,
ClassID INT ,
ClassNumber INT ,
BookName NVARCHAR(100) ,
Board NVARCHAR(100) ,
Publisher NVARCHAR(100) ,
SubjectName NVARCHAR(100) ,
SubjectID INT ,
BoardID INT ,
STATUS CHAR(1) ,
BoardStatus CHAR(1)
);


INSERT INTO temp_MatchList
SELECT * FROM
(
SELECT vBooksDetails.*
FROM
(
SELECT DISTINCT * FROM
(
SELECT
classmaster.ClassID AS ClassID,
CDisplayName,
subjectmaster.id ,
SubjectID,
subjectmaster.DispName ,
BookName
FROM classmaster
JOIN subjectclasslinkage ON
classmaster.ClassID = subjectclasslinkage.ClassNumber
JOIN subjectmaster ON
subjectclasslinkage.SubjectId = subjectmaster.id
LEFT JOIN
(
SELECT * FROM vBooksDetails
WHERE
(
(
(p_LinkedBoard = 0 ) OR (BoardID IN (SELECT boardname
FROM userboards WHERE userboards.usrid = p_UsrID )))
AND ((p_LinkedBoards = 0 ) OR (BoardID IN (SELECT BoardID FROM schoolboards )))
AND EXISTS (SELECT * FROM vBoardInfo WHERE ID = BoardID )AND vBooksDetails.Status = 'A'))
lessonsyllabus ON subjectmaster.id = lessonsyllabus.SubjectID AND
lessonsyllabus.ClassID = classmaster.ClassID ) TAB WHERE TAB.BookName IS NULL ) TAB_MISSING
JOIN vBooksDetails ON TAB_MISSING.ClassID = vBooksDetails.ClassID
AND TAB_MISSING.SubjectID = vBooksDetails.SubjectID AND vBooksDetails.BoardID = 3
WHERE vBooksDetails.Status = 'A' AND ((p_LinkedClasses = 0 )
OR ( vBooksDetails.ClassID IN(SELECT classid FROM userclasses WHERE UsrID = p_UsrID )))
AND (( p_LinkedSubjects = 0 ) OR (vBooksDetails.SubjectID IN (SELECT subjectid FROM usersubjects
WHERE usrid = p_UsrID )))
-- AND ((@LinkedBoard = 0) OR (BoardID IN (SELECT boardname FROM userboards WHERE usrid=@UsrID)) )
AND ((p_SchoolBooks = 0 ) OR ( vBooksDetails.Id NOT IN ( SELECT BookID FROM SchoolBooks))));

END $$

DELIMITER ;
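As an aside on the error itself: MySQL raises ERROR 1248 whenever a parenthesized subquery in a FROM clause has no alias, so each ( ... ) in the FROM chain above is worth checking. A minimal illustration:


SELECT * FROM (SELECT 1);        -- fails with ERROR 1248: no alias
SELECT * FROM (SELECT 1) AS t;   -- works: the derived table is aliased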




Get the pais_id values not associated with a fabricante_distribuidor_id and producto_solicitud_id (the opposite of the SELECT query)


I have this query to get all the pais_id values for a given fabricante_distribuidor_id and producto_solicitud_id; the query works fine.



SELECT DISTINCT
nomencladores.pais_fabricante_producto_solicitud.pais_id
FROM
negocio.fabricante_producto_solicitud
INNER JOIN nomencladores.pais_fabricante_producto_solicitud ON (negocio.fabricante_producto_solicitud.id = nomencladores.pais_fabricante_producto_solicitud.fabricante_producto_solicitud_id)
WHERE
negocio.fabricante_producto_solicitud.fabricante_distribuidor_id = 1 AND
negocio.fabricante_producto_solicitud.producto_solicitud_id = 1


But I want the opposite: all the pais_id values that are not associated with that fabricante_distribuidor_id and producto_solicitud_id. How?
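One common shape for this kind of anti-join is NOT EXISTS against the full list of countries. A sketch: nomencladores.pais is a hypothetical table holding every pais_id; substitute whatever table is the authoritative list:


SELECT p.id AS pais_id
FROM nomencladores.pais AS p   -- hypothetical table listing all countries
WHERE NOT EXISTS (
    SELECT 1
    FROM negocio.fabricante_producto_solicitud AS fps
    INNER JOIN nomencladores.pais_fabricante_producto_solicitud AS pfps
        ON fps.id = pfps.fabricante_producto_solicitud_id
    WHERE pfps.pais_id = p.id
      AND fps.fabricante_distribuidor_id = 1
      AND fps.producto_solicitud_id = 1
);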





Cannot connect to mysql server


Hello, I am trying to connect to a MySQL server from PHP using PDO.


But I get this error:


An error occured : SQLSTATE[HY000] [2002] Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)


Note that this is not a database I host. I was simply given the username and password to construct the database, create the users etc.



function ConnectToDb()
{
    try {
        // note: a 2002 "socket" error usually means the driver fell back to
        // the local Unix socket, i.e. the host it actually used was "localhost"
        $dsn      = 'mysql:host=1.1.1.1;dbname=dummyDbName';
        $username = 'dummyUser';
        $password = 'dummyPassword';

        $LINK = new PDO($dsn, $username, $password);
        $LINK->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);
        $LINK->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

        // PDO throws PDOException on failure, so reaching here means success;
        // the old mysql_error() check was unreachable and has been removed
        return $LINK;
    } catch (PDOException $ex) {
        echo "An error occurred : " . $ex->getMessage();
    }
}


I know that this works on localhost; I'm using it there with no problem. But as soon as I try to connect to the production database, it fails. Does anyone have a hint?





Application testing: using multiple users in Oracle instead of in-memory H2. How to make it as fast as possible?


I decided to stop using H2, for obvious reasons (I have Oracle in production, and H2's compatibility mode is not a real substitute). So I wrote a simple test framework which, for each test in my application, does the following:




  1. Generate a random username (in the example below it's test_user).




  2. Create a new user and its tablespaces:



    create tablespace test_user_ts
    datafile 'test_user_tabspace.dat'
    size 10M reuse
    autoextend on next 500K;

    create temporary tablespace test_user_ts_tmp
    tempfile 'test_user.tabspace_temp.dat'
    size 10M reuse
    autoextend on next 500K;

    create user test_user
    identified by test_password
    default tablespace test_user_ts
    temporary tablespace test_user_ts_tmp;

    grant create session to test_user;

    grant all privileges to test_user;



  3. Populate database with test data.




  4. Run test.




  5. Clean up:



    drop user test_user cascade;
    drop tablespace test_user_ts_tmp;
    drop tablespace test_user_ts;



The problem is that stages 1-3 are slow. How can I make them as fast as possible? Is there maybe a way of copying an existing DB schema into another one?


Db version: Oracle 11g


I have full control over the Oracle instance. It runs on a Vagrant image on my dev machine.
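One observation on steps 2 and 5: the expensive part is usually creating and dropping the datafiles, not the user. If the tests can share storage, the tablespaces can be created once for the whole suite, leaving only a cheap CREATE USER / DROP USER pair inside each test. A sketch (test_ts is a hypothetical shared tablespace; temp is the default temporary tablespace):


-- once, when the suite starts:
create tablespace test_ts
datafile 'test_ts.dat'
size 100M reuse
autoextend on next 10M;

-- per test: create only the user, reusing the shared tablespaces
create user test_user
identified by test_password
default tablespace test_ts
temporary tablespace temp
quota unlimited on test_ts;

grant create session to test_user;

-- per-test clean-up: dropping the user drops its objects; the tablespace stays
drop user test_user cascade;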





Database system for accounting SaaS


I am in the process of building a large-scale, cloud-based SaaS accounting / finance application, so stability & scalability are very important. My domain model looks like this:


Business
  - protected $id
  - protected $displayName
  - protected $contactFirstName
  - protected $contactLastName
  - protected $address

Customer (inherited from Business)
  - protected $customerRegNo
  - protected $customerRating

Supplier (inherited from Business)
  - protected $supplierBankAccount (convenient data for paying the supplier)


If I were to translate it into an RDBMS schema, it could become something like the following:


APPROACH 1


businesses



  • id

  • displayName

  • contactFirstName

  • contactLastName

  • address


customers



  • id

  • business_id

  • reg_no

  • rating


suppliers



  • id

  • business_id

  • bank_account


APPROACH 2


businesses



  • id

  • displayName

  • contactFirstName

  • contactLastName

  • address

  • customer_reg_no (nullable)

  • customer_rating (nullable)

  • supplier_bank_account (nullable)


Approach 2 doesn't look so natural from an OOP point of view, but it is a commonly used approach in an RDBMS. Approach 1 tries to mimic the OOP design, but it requires a lot of joining when performing searches, which could be expensive.
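To make the trade-off concrete, approach 1 pays a join for even a simple customer lookup, while approach 2 answers it from one table. A sketch using the column names above ('REG-12345' is just an example value):


-- approach 1: every customer query joins back to businesses
SELECT b.displayName, b.address, c.reg_no, c.rating
FROM customers AS c
INNER JOIN businesses AS b ON b.id = c.business_id
WHERE c.reg_no = 'REG-12345';

-- approach 2: the same lookup touches a single table
SELECT displayName, address, customer_reg_no, customer_rating
FROM businesses
WHERE customer_reg_no = 'REG-12345';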


What are the pros and cons of both approaches? Which one is better suited for an accounting system, which one do most people use, and why?


Are there any NoSQL alternatives that can cater for the OOP design without sacrificing performance, and how do they compare in scalability & stability?


Ultimately I want to build a database system that can be scaled easily by just throwing more machines into the mix. I know that this is not easy to achieve with MySQL, and I have heard that NoSQL alternatives scale better but are weaker in stability & consistency.





How to ensure MySQL imports of .sql files properly retain foreign key relationships, constraints, etc.


Recently, we restored a MySQL '*.sql' file on two separate brand new MySQL instances. The MySQL version is 5.5.40 running on CentOS 6.6.


In one instance, everything restored fine. On the other, a foreign key relationship didn't import properly. We found this out when running smoke tests. The following error showed in the application log:


Integrity constraint violation: 1451 Cannot delete or update a parent row: a foreign key constraint fails (prod.table1, CONSTRAINT table1_ibfk_1 FOREIGN KEY (unique_id_1) REFERENCES table2 (id))


Is there a way to proactively confirm after such a restore that all foreign keys and other constraints were properly imported? Obviously application functional testing can reveal this, but there will need to be a lot of tests done to make sure all foreign key relationships were covered, and it's very challenging to define tests that cover every possible relationship. I'm hoping for something more sophisticated.


I know that there are tools to checksum the data in the tables but that's not my goal here. I want to checksum the table relationships. I'm hoping that someone can point me to a utility or set of utilities that will allow a comparison of the database structure in the .sql file to the database itself after restore/import.
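One low-tech check (a sketch) is to dump the foreign-key metadata from information_schema on each restored instance and diff the two result sets:


SELECT rc.constraint_name,
       rc.table_name,
       rc.referenced_table_name,
       rc.update_rule,
       rc.delete_rule
FROM information_schema.referential_constraints AS rc
WHERE rc.constraint_schema = 'prod'   -- the schema name from the error message
ORDER BY rc.constraint_name, rc.table_name;


Comparing mysqldump --no-data output from both instances (and from a scratch restore of the original .sql file) covers the rest of the structure.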





Can I grant multiple remote LocalService accounts access to a SQL Server database?


I am designing a multi-environment Windows-based software product that will extensively employ Windows services running as LocalService to perform data access operations. All of the data access endpoints are .NET applications that use Entity Framework to access the databases.


I don't want to use SQL Server authentication if I can avoid it; I want to use Windows authentication everywhere. Currently there are no roles or users at the database level for any of the system's databases. My current plan is to grant SELECT/INSERT/UPDATE/EXECUTE privileges to the LocalService accounts associated with each of the service host environments.


First, is that approach even possible? After a dizzying amount of research, I can't make sense of how SQL Server will distinguish between the LocalService account on two remote environments.


Second, am I missing a serious security implication? I understand the principle that a role should be limited and narrow in scope. I also understand that any other applications on the servers running as LocalService -- including malware -- will have database access using the current plan. The challenge that I'm facing is that the product is licensed and will be deployed on customer site, and I'm not sure how to tell our customers that our software needs to commission its own domain accounts or, even worse, that they need to commission them.


If it makes the most sense, I will just abandon the idea of using LocalService to access databases and plan on creating purpose-specific service accounts. My expertise is stretched beyond its limit for this particular problem. Thanks in advance for sharing your thoughts.
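On the first point, what SQL Server sees from a remote service depends on the account: NetworkService and LocalSystem present the computer's domain account (DOMAIN\MACHINE$), so each host is distinguishable, while LocalService presents anonymous credentials on the network and generally cannot authenticate remotely at all. If the hosts are domain-joined, a per-machine grant might look like the following sketch (all names are hypothetical):


-- CONTOSO\APPHOST1$ is the computer account of one service host
CREATE LOGIN [CONTOSO\APPHOST1$] FROM WINDOWS;
GO
USE MyAppDb;
CREATE USER [CONTOSO\APPHOST1$] FOR LOGIN [CONTOSO\APPHOST1$];
ALTER ROLE db_datareader ADD MEMBER [CONTOSO\APPHOST1$];
ALTER ROLE db_datawriter ADD MEMBER [CONTOSO\APPHOST1$];
GRANT EXECUTE TO [CONTOSO\APPHOST1$];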





ORA-08102 while creating a view


I am learning PL/SQL programming. I'm trying to create a view with the following statement:



create or replace view viewName as select ID from employees;


The view creation throws the following error:



ORA-00604: error occurred at recursive SQL level 1
ORA-08102: index key not found, obj# 39, file 1, block 61248


How can I fix it?


The problem occurs with all objects: procedures, functions, and views. I am using the HR sample schema of Oracle 11g for learning purposes.
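ORA-00604 at recursive SQL level 1 means the failure happened in a recursive data dictionary operation, so ORA-08102 here points at a corrupt dictionary index rather than at the view being created. A first diagnostic step (a sketch; requires DBA privileges) is to identify what obj# 39 is:


SELECT owner, object_name, object_type
FROM dba_objects
WHERE object_id = 39;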





Can CTE sub-statements be prevented from executing at all?


Through this strategy, the time consumed by my application's queries has dropped to extreme lows; however, some queries consume more milliseconds than necessary.


The reason is that, upon examining EXPLAIN ANALYZE, I found that sub-statements that depend on the RETURNING of another query still execute even when the source query returns an empty set.


In my application, I've reduced the number of queries from many tens to four. All are bulk operations. The ones that have many sub-statements are the ones that consume 5 ms or more. It's those I'd like to trim.


I gave functions another chance and tried to manually prevent chain sections by using IF statements. I had to use TEMPORARY TABLEs to store the RETURNING results and check them in the IFs.


I can't explain it, but the time consumed was 2 to 4x that consumed by the equivalent CTE.


To that end, is it possible to completely prevent a CTE sub-statement from executing if a RETURNING that it depends upon has no rows?
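For anyone reproducing this: PostgreSQL executes every data-modifying sub-statement of a WITH chain exactly once per statement, whether or not the CTE it reads from produced rows, so there is no built-in short-circuit. A minimal illustration (tables a and b are hypothetical):


WITH moved AS (
    DELETE FROM a
    WHERE id = -1        -- matches no rows
    RETURNING id, payload
)
INSERT INTO b (id, payload)
SELECT id, payload
FROM moved;

-- EXPLAIN ANALYZE still shows the INSERT node starting up;
-- it simply inserts zero rows.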


Another 50% reduction


I long ago followed the advice here, stripped out my comma joins where I could and replaced them with EXISTS.


Due to my inexperience, I replaced an EXISTS with a comma join in one sub-statement because I thought one query needed it instead. The average time consumed dropped again.


I changed all of the EXISTS back to comma joins, and for queries with empty RETURNINGs, the time was chopped in half or more.


In those cases, the EXISTS go to disk, and it appears as if they still execute despite a previous WHERE condition that would guarantee no writing.


Performance when the previous RETURNINGs do return rows is the same. The resulting relationships are 1:1, and the tables referenced are collections of integer references to other tables' primary keys. The estimated EXPLAIN ANALYZE costs have gone through the roof.