Commit c8f09369, authored 13 years ago by Fabian Pedregosa
Use LinearSVC's docstring instead of outdated one.
Parent: 55babd78
Showing 1 changed file: sklearn/svm/sparse/classes.py (+4, −52)
...
...
@@ -183,60 +183,12 @@ class LinearSVC(SparseBaseLibLinear, ClassifierMixin,
choice of penalties and loss functions and should be faster for
huge datasets.
Parameters
----------
loss : string, 'l1' or 'l2' (default 'l2')
    Specifies the loss function. With 'l1' it is the standard SVM
    loss (a.k.a. hinge loss), while with 'l2' it is the squared loss
    (a.k.a. squared hinge loss).
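The 'l1' (hinge) versus 'l2' (squared hinge) distinction above can be sketched in plain NumPy. This is an illustrative definition of the two loss functions as applied to margins ``y * f(x)``, not scikit-learn's internal (LIBLINEAR) implementation:

```python
import numpy as np

def hinge(margins):
    # 'l1' loss in the old naming: max(0, 1 - y * f(x))
    return np.maximum(0.0, 1.0 - margins)

def squared_hinge(margins):
    # 'l2' loss: the same quantity, squared; penalizes violations
    # more smoothly near the margin and more harshly far from it
    return np.maximum(0.0, 1.0 - margins) ** 2

m = np.array([0.5, 2.0])  # example margins y * f(x)
h = hinge(m)              # nonzero only where the margin is below 1
sh = squared_hinge(m)
```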
penalty : string, 'l1' or 'l2' (default 'l2')
    Specifies the norm used in the penalization. The 'l2' penalty
    is the standard used in SVC. The 'l1' leads to ``coef_``
    vectors that are sparse.
C : float, optional (default=1.0)
    Penalty parameter C of the error term.

dual : bool, (default True)
    Select the algorithm to either solve the dual or primal
    optimization problem.

intercept_scaling : float, default: 1
    When self.fit_intercept is True, instance vector x becomes
    [x, self.intercept_scaling], i.e. a "synthetic" feature with
    constant value equal to intercept_scaling is appended to the
    instance vector. The intercept becomes
    intercept_scaling * synthetic feature weight.
    Note: the synthetic feature weight is subject to l1/l2
    regularization, as are all other features. To lessen the effect
    of regularization on the synthetic feature weight (and therefore
    on the intercept), intercept_scaling has to be increased.
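The intercept_scaling mechanics described above can be illustrated with a short NumPy sketch; ``augment`` is a hypothetical helper written for this note, not a scikit-learn function:

```python
import numpy as np

def augment(X, intercept_scaling=1.0):
    # Append the constant "synthetic" feature from the docstring, so
    # that fitting a plain linear weight vector on the augmented data
    # also encodes an intercept:
    #   intercept = intercept_scaling * (weight of the last column)
    col = np.full((X.shape[0], 1), intercept_scaling)
    return np.hstack([X, col])

X = np.array([[0.0, 1.0],
              [2.0, 3.0]])
Xa = augment(X, intercept_scaling=10.0)
# Xa has one extra column, every entry equal to intercept_scaling.
```

Because the last column is regularized like any other feature, a larger intercept_scaling shrinks the weight the regularizer sees for a given intercept, which is why increasing it lessens the regularization effect on the intercept.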
Attributes
----------
`coef_` : array, shape = [n_features] if n_classes == 2 else [n_classes, n_features]
    Weights assigned to the features (coefficients in the primal
    problem). This is only available in the case of a linear kernel.

`intercept_` : array, shape = [1] if n_classes == 2 else [n_classes]
    Constants in decision function.

See :class:`sklearn.svm.SVC` for a complete list of parameters.

Notes
-----
The underlying C implementation uses a random number generator to
select features when fitting the model. It is thus not uncommon
to have slightly different results for the same input data. If
that happens, try with a smaller eps parameter.

See also
--------
SVC

References
----------
LIBLINEAR -- A Library for Large Linear Classification
http://www.csie.ntu.edu.tw/~cjlin/liblinear/

For best results, this accepts a matrix in csr format
(scipy.sparse.csr), but should be able to convert from any array-like
object (including other sparse representations).
"""
pass
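The docstring's closing paragraph recommends CSR input. A minimal sketch of how a fitted linear SVM scores sparse rows follows; the ``coef_`` and ``intercept_`` values here are made up for illustration, not the output of any real fit:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical fitted parameters for a binary linear SVM on 3 features.
coef_ = np.array([0.5, -1.0, 2.0])
intercept_ = -0.25

# CSR input, as the docstring recommends for best performance.
X = csr_matrix(np.array([[1.0, 0.0, 0.0],
                         [0.0, 1.0, 0.0]]))

scores = X @ coef_ + intercept_   # the linear decision function
pred = (scores > 0).astype(int)   # class 1 where the score is positive
```

Scoring reduces to a sparse matrix–vector product, which is why a CSR matrix (row-oriented storage) is the preferred input representation.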